00:00:00.001 Started by upstream project "autotest-per-patch" build number 126171 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "jbp-per-patch" build number 23926 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.071 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.071 The recommended git tool is: git 00:00:00.071 using credential 00000000-0000-0000-0000-000000000002 00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.099 Fetching changes from the remote Git repository 00:00:00.101 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.134 Using shallow fetch with depth 1 00:00:00.134 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.134 > git --version # timeout=10 00:00:00.178 > git --version # 'git version 2.39.2' 00:00:00.178 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.217 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.217 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/41/22241/22 # timeout=5 00:00:04.471 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.482 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.493 Checking out Revision 055051402f6bd793109ccc450ac2f885bb0fdaeb (FETCH_HEAD) 00:00:04.493 > git config core.sparsecheckout # timeout=10 00:00:04.505 > git read-tree -mu HEAD # timeout=10 00:00:04.523 > git checkout -f 055051402f6bd793109ccc450ac2f885bb0fdaeb # timeout=5 00:00:04.547 Commit message: "jenkins/jjb-config: Add release-build jobs to per-patch" 00:00:04.547 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.641 [Pipeline] Start of Pipeline 00:00:04.655 [Pipeline] library 00:00:04.656 Loading library shm_lib@master 00:00:04.656 Library shm_lib@master is cached. Copying from home. 00:00:04.671 [Pipeline] node 00:00:04.678 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:04.680 [Pipeline] { 00:00:04.689 [Pipeline] catchError 00:00:04.690 [Pipeline] { 00:00:04.703 [Pipeline] wrap 00:00:04.713 [Pipeline] { 00:00:04.723 [Pipeline] stage 00:00:04.725 [Pipeline] { (Prologue) 00:00:04.749 [Pipeline] echo 00:00:04.751 Node: VM-host-SM16 00:00:04.757 [Pipeline] cleanWs 00:00:04.822 [WS-CLEANUP] Deleting project workspace... 00:00:04.822 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.827 [WS-CLEANUP] done 00:00:05.030 [Pipeline] setCustomBuildProperty 00:00:05.119 [Pipeline] httpRequest 00:00:05.135 [Pipeline] echo 00:00:05.137 Sorcerer 10.211.164.101 is alive 00:00:05.144 [Pipeline] httpRequest 00:00:05.147 HttpMethod: GET 00:00:05.148 URL: http://10.211.164.101/packages/jbp_055051402f6bd793109ccc450ac2f885bb0fdaeb.tar.gz 00:00:05.148 Sending request to url: http://10.211.164.101/packages/jbp_055051402f6bd793109ccc450ac2f885bb0fdaeb.tar.gz 00:00:05.162 Response Code: HTTP/1.1 200 OK 00:00:05.162 Success: Status code 200 is in the accepted range: 200,404 00:00:05.163 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_055051402f6bd793109ccc450ac2f885bb0fdaeb.tar.gz 00:00:07.379 [Pipeline] sh 00:00:07.653 + tar --no-same-owner -xf jbp_055051402f6bd793109ccc450ac2f885bb0fdaeb.tar.gz 00:00:07.670 [Pipeline] httpRequest 00:00:07.687 [Pipeline] echo 00:00:07.688 Sorcerer 10.211.164.101 is alive 00:00:07.694 [Pipeline] httpRequest 00:00:07.697 HttpMethod: GET 00:00:07.698 URL: http://10.211.164.101/packages/spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz 00:00:07.698 Sending request to url: http://10.211.164.101/packages/spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz 00:00:07.699 Response Code: HTTP/1.1 200 OK 00:00:07.700 Success: Status code 200 is in the accepted range: 200,404 00:00:07.700 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz 00:00:26.419 [Pipeline] sh 00:00:26.697 + tar --no-same-owner -xf spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz 00:00:29.991 [Pipeline] sh 00:00:30.272 + git -C spdk log --oneline -n5 00:00:30.272 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:00:30.272 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:00:30.272 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:00:30.272 719d03c6a sock/uring: only register net impl if supported 00:00:30.272 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:30.292 [Pipeline] writeFile 00:00:30.312 [Pipeline] sh 00:00:30.592 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:30.604 [Pipeline] sh 00:00:30.882 + cat autorun-spdk.conf 00:00:30.883 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:30.883 SPDK_TEST_NVMF=1 00:00:30.883 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:30.883 SPDK_TEST_URING=1 00:00:30.883 SPDK_TEST_USDT=1 00:00:30.883 SPDK_RUN_UBSAN=1 00:00:30.883 NET_TYPE=virt 00:00:30.883 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:30.914 RUN_NIGHTLY=0 00:00:30.917 [Pipeline] } 00:00:30.935 [Pipeline] // stage 00:00:30.954 [Pipeline] stage 00:00:30.956 [Pipeline] { (Run VM) 00:00:30.969 [Pipeline] sh 00:00:31.249 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:31.249 + echo 'Start stage prepare_nvme.sh' 00:00:31.249 Start stage prepare_nvme.sh 00:00:31.249 + [[ -n 5 ]] 00:00:31.249 + disk_prefix=ex5 00:00:31.249 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:31.249 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:31.249 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:31.249 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.249 ++ SPDK_TEST_NVMF=1 00:00:31.249 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.249 ++ SPDK_TEST_URING=1 00:00:31.249 ++ SPDK_TEST_USDT=1 00:00:31.249 ++ SPDK_RUN_UBSAN=1 00:00:31.249 ++ NET_TYPE=virt 00:00:31.249 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.249 ++ RUN_NIGHTLY=0 00:00:31.249 + cd 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:31.249 + nvme_files=() 00:00:31.249 + declare -A nvme_files 00:00:31.249 + backend_dir=/var/lib/libvirt/images/backends 00:00:31.249 + nvme_files['nvme.img']=5G 00:00:31.249 + nvme_files['nvme-cmb.img']=5G 00:00:31.249 + nvme_files['nvme-multi0.img']=4G 00:00:31.249 + nvme_files['nvme-multi1.img']=4G 00:00:31.249 + nvme_files['nvme-multi2.img']=4G 00:00:31.249 + nvme_files['nvme-openstack.img']=8G 00:00:31.249 + nvme_files['nvme-zns.img']=5G 00:00:31.249 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:31.249 + (( SPDK_TEST_FTL == 1 )) 00:00:31.249 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:31.249 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:31.249 + for nvme in "${!nvme_files[@]}" 00:00:31.249 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:31.249 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.249 + for nvme in "${!nvme_files[@]}" 00:00:31.249 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:31.249 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.249 + for nvme in "${!nvme_files[@]}" 00:00:31.249 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:31.249 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:31.249 + for nvme in "${!nvme_files[@]}" 00:00:31.250 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:31.250 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.250 + for nvme in "${!nvme_files[@]}" 00:00:31.250 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:31.250 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.250 + for nvme in "${!nvme_files[@]}" 00:00:31.250 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:31.250 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.250 + for nvme in "${!nvme_files[@]}" 00:00:31.250 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:31.508 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.508 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:31.508 + echo 'End stage prepare_nvme.sh' 00:00:31.508 End stage prepare_nvme.sh 00:00:31.523 [Pipeline] sh 00:00:31.805 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:31.805 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:00:31.805 00:00:31.805 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:31.805 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:31.805 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:31.805 HELP=0 00:00:31.805 DRY_RUN=0 00:00:31.805 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:31.805 NVME_DISKS_TYPE=nvme,nvme, 00:00:31.805 NVME_AUTO_CREATE=0 00:00:31.805 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:31.805 NVME_CMB=,, 00:00:31.805 NVME_PMR=,, 00:00:31.805 NVME_ZNS=,, 00:00:31.805 NVME_MS=,, 00:00:31.805 NVME_FDP=,, 00:00:31.805 SPDK_VAGRANT_DISTRO=fedora38 00:00:31.805 SPDK_VAGRANT_VMCPU=10 00:00:31.805 SPDK_VAGRANT_VMRAM=12288 00:00:31.805 SPDK_VAGRANT_PROVIDER=libvirt 00:00:31.805 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:31.805 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:31.805 SPDK_OPENSTACK_NETWORK=0 00:00:31.805 VAGRANT_PACKAGE_BOX=0 00:00:31.805 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:31.805 FORCE_DISTRO=true 00:00:31.805 VAGRANT_BOX_VERSION= 00:00:31.805 EXTRA_VAGRANTFILES= 00:00:31.805 NIC_MODEL=e1000 00:00:31.806 00:00:31.806 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:00:31.806 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:35.090 Bringing machine 'default' up with 'libvirt' provider... 00:00:35.670 ==> default: Creating image (snapshot of base box volume). 00:00:35.933 ==> default: Creating domain with the following settings... 00:00:35.933 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721035589_521e0d95079449005849 00:00:35.933 ==> default: -- Domain type: kvm 00:00:35.933 ==> default: -- Cpus: 10 00:00:35.933 ==> default: -- Feature: acpi 00:00:35.933 ==> default: -- Feature: apic 00:00:35.933 ==> default: -- Feature: pae 00:00:35.933 ==> default: -- Memory: 12288M 00:00:35.933 ==> default: -- Memory Backing: hugepages: 00:00:35.933 ==> default: -- Management MAC: 00:00:35.933 ==> default: -- Loader: 00:00:35.933 ==> default: -- Nvram: 00:00:35.933 ==> default: -- Base box: spdk/fedora38 00:00:35.933 ==> default: -- Storage pool: default 00:00:35.933 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721035589_521e0d95079449005849.img (20G) 00:00:35.933 ==> default: -- Volume Cache: default 00:00:35.933 ==> default: -- Kernel: 00:00:35.933 ==> default: -- Initrd: 00:00:35.933 ==> default: -- Graphics Type: vnc 00:00:35.933 ==> default: -- Graphics Port: -1 00:00:35.933 ==> default: -- Graphics IP: 127.0.0.1 00:00:35.933 ==> default: -- Graphics Password: Not defined 00:00:35.933 ==> default: -- Video Type: cirrus 00:00:35.933 ==> default: -- Video VRAM: 9216 00:00:35.933 ==> default: -- Sound Type: 00:00:35.933 ==> default: -- Keymap: en-us 00:00:35.933 ==> default: -- TPM Path: 00:00:35.933 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:35.933 ==> default: -- Command line args: 00:00:35.933 ==> default: -> value=-device, 00:00:35.933 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:35.933 ==> default: -> value=-drive, 00:00:35.933 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:35.933 ==> default: -> value=-device, 00:00:35.933 ==> 
default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.933 ==> default: -> value=-device, 00:00:35.933 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:35.933 ==> default: -> value=-drive, 00:00:35.933 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:35.933 ==> default: -> value=-device, 00:00:35.933 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.933 ==> default: -> value=-drive, 00:00:35.933 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:35.933 ==> default: -> value=-device, 00:00:35.933 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.933 ==> default: -> value=-drive, 00:00:35.933 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:35.933 ==> default: -> value=-device, 00:00:35.933 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:35.933 ==> default: Creating shared folders metadata... 00:00:35.933 ==> default: Starting domain. 00:00:37.829 ==> default: Waiting for domain to get an IP address... 00:00:56.000 ==> default: Waiting for SSH to become available... 00:00:56.000 ==> default: Configuring and enabling network interfaces... 00:00:59.285 default: SSH address: 192.168.121.119:22 00:00:59.285 default: SSH username: vagrant 00:00:59.285 default: SSH auth method: private key 00:01:01.186 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:09.325 ==> default: Mounting SSHFS shared folder... 00:01:09.891 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:09.891 ==> default: Checking Mount.. 00:01:11.266 ==> default: Folder Successfully Mounted! 00:01:11.266 ==> default: Running provisioner: file... 00:01:12.200 default: ~/.gitconfig => .gitconfig 00:01:12.458 00:01:12.458 SUCCESS! 00:01:12.458 00:01:12.458 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:12.458 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:12.458 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
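For reference, the "-> value=" fragments above are the NVMe-specific arguments that vagrant-libvirt hands to QEMU when defining this domain. Consolidated, they correspond roughly to the sketch below; only the NVMe portion of the command line is shown, the backing images are the ex5-* files created by prepare_nvme.sh earlier in this log, and the emulator path is the SPDK_QEMU_EMULATOR value set above.

  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

In other words, controller nvme-0 (serial 12340) gets one namespace backed by the 5G image and controller nvme-1 (serial 12341) gets three namespaces backed by the 4G multi*.img files, which later appear in the guest as nvme0n1 and nvme1n1..nvme1n3 in the setup.sh status output.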
00:01:12.458 00:01:12.466 [Pipeline] } 00:01:12.483 [Pipeline] // stage 00:01:12.493 [Pipeline] dir 00:01:12.493 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:12.495 [Pipeline] { 00:01:12.508 [Pipeline] catchError 00:01:12.509 [Pipeline] { 00:01:12.520 [Pipeline] sh 00:01:12.793 + vagrant ssh-config --host vagrant 00:01:12.793 + sed -ne /^Host/,$p 00:01:12.793 + tee ssh_conf 00:01:16.976 Host vagrant 00:01:16.976 HostName 192.168.121.119 00:01:16.976 User vagrant 00:01:16.976 Port 22 00:01:16.976 UserKnownHostsFile /dev/null 00:01:16.976 StrictHostKeyChecking no 00:01:16.976 PasswordAuthentication no 00:01:16.976 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:16.976 IdentitiesOnly yes 00:01:16.976 LogLevel FATAL 00:01:16.976 ForwardAgent yes 00:01:16.976 ForwardX11 yes 00:01:16.976 00:01:16.990 [Pipeline] withEnv 00:01:16.992 [Pipeline] { 00:01:17.007 [Pipeline] sh 00:01:17.292 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:17.292 source /etc/os-release 00:01:17.292 [[ -e /image.version ]] && img=$(< /image.version) 00:01:17.292 # Minimal, systemd-like check. 00:01:17.292 if [[ -e /.dockerenv ]]; then 00:01:17.292 # Clear garbage from the node's name: 00:01:17.292 # agt-er_autotest_547-896 -> autotest_547-896 00:01:17.292 # $HOSTNAME is the actual container id 00:01:17.292 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:17.292 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:17.292 # We can assume this is a mount from a host where container is running, 00:01:17.292 # so fetch its hostname to easily identify the target swarm worker. 00:01:17.292 container="$(< /etc/hostname) ($agent)" 00:01:17.292 else 00:01:17.292 # Fallback 00:01:17.292 container=$agent 00:01:17.292 fi 00:01:17.292 fi 00:01:17.292 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:17.292 00:01:17.304 [Pipeline] } 00:01:17.324 [Pipeline] // withEnv 00:01:17.332 [Pipeline] setCustomBuildProperty 00:01:17.347 [Pipeline] stage 00:01:17.349 [Pipeline] { (Tests) 00:01:17.367 [Pipeline] sh 00:01:17.644 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:17.658 [Pipeline] sh 00:01:17.951 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:17.969 [Pipeline] timeout 00:01:17.969 Timeout set to expire in 30 min 00:01:17.971 [Pipeline] { 00:01:17.988 [Pipeline] sh 00:01:18.266 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:18.832 HEAD is now at e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:01:18.845 [Pipeline] sh 00:01:19.123 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:19.393 [Pipeline] sh 00:01:19.667 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:19.683 [Pipeline] sh 00:01:19.961 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:19.961 ++ readlink -f spdk_repo 00:01:19.961 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:19.961 + [[ -n /home/vagrant/spdk_repo ]] 00:01:19.961 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:19.961 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 
00:01:19.961 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:19.961 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:19.961 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:19.961 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:19.961 + cd /home/vagrant/spdk_repo 00:01:19.961 + source /etc/os-release 00:01:19.961 ++ NAME='Fedora Linux' 00:01:19.961 ++ VERSION='38 (Cloud Edition)' 00:01:19.961 ++ ID=fedora 00:01:19.961 ++ VERSION_ID=38 00:01:19.961 ++ VERSION_CODENAME= 00:01:19.961 ++ PLATFORM_ID=platform:f38 00:01:19.961 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:19.961 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:19.961 ++ LOGO=fedora-logo-icon 00:01:19.961 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:19.961 ++ HOME_URL=https://fedoraproject.org/ 00:01:19.961 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:19.961 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:19.961 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:19.961 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:19.961 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:19.961 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:19.961 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:19.961 ++ SUPPORT_END=2024-05-14 00:01:19.961 ++ VARIANT='Cloud Edition' 00:01:19.961 ++ VARIANT_ID=cloud 00:01:19.961 + uname -a 00:01:20.219 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:20.219 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:20.477 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:20.477 Hugepages 00:01:20.477 node hugesize free / total 00:01:20.477 node0 1048576kB 0 / 0 00:01:20.477 node0 2048kB 0 / 0 00:01:20.477 00:01:20.477 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:20.477 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:20.736 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:20.736 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:20.736 + rm -f /tmp/spdk-ld-path 00:01:20.736 + source autorun-spdk.conf 00:01:20.736 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.736 ++ SPDK_TEST_NVMF=1 00:01:20.736 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.736 ++ SPDK_TEST_URING=1 00:01:20.736 ++ SPDK_TEST_USDT=1 00:01:20.736 ++ SPDK_RUN_UBSAN=1 00:01:20.736 ++ NET_TYPE=virt 00:01:20.736 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.736 ++ RUN_NIGHTLY=0 00:01:20.736 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:20.736 + [[ -n '' ]] 00:01:20.736 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:20.736 + for M in /var/spdk/build-*-manifest.txt 00:01:20.736 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:20.736 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:20.736 + for M in /var/spdk/build-*-manifest.txt 00:01:20.736 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:20.736 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:20.736 ++ uname 00:01:20.736 + [[ Linux == \L\i\n\u\x ]] 00:01:20.736 + sudo dmesg -T 00:01:20.736 + sudo dmesg --clear 00:01:20.736 + dmesg_pid=5261 00:01:20.736 + sudo dmesg -Tw 00:01:20.736 + [[ Fedora Linux == FreeBSD ]] 00:01:20.736 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:20.736 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:20.736 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:20.736 + [[ -x 
/usr/src/fio-static/fio ]] 00:01:20.736 + export FIO_BIN=/usr/src/fio-static/fio 00:01:20.736 + FIO_BIN=/usr/src/fio-static/fio 00:01:20.736 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:20.736 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:20.736 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:20.736 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:20.736 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:20.736 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:20.736 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:20.736 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:20.736 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:20.736 Test configuration: 00:01:20.736 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.736 SPDK_TEST_NVMF=1 00:01:20.736 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.736 SPDK_TEST_URING=1 00:01:20.736 SPDK_TEST_USDT=1 00:01:20.736 SPDK_RUN_UBSAN=1 00:01:20.736 NET_TYPE=virt 00:01:20.736 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.736 RUN_NIGHTLY=0 09:27:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:20.736 09:27:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:20.736 09:27:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:20.736 09:27:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:20.736 09:27:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.736 09:27:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.736 09:27:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.736 09:27:15 -- paths/export.sh@5 -- $ export PATH 00:01:20.736 09:27:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.736 09:27:15 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:20.736 09:27:15 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:20.736 09:27:15 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721035635.XXXXXX 00:01:20.736 09:27:15 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721035635.wQrZUU 00:01:20.736 09:27:15 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:20.736 09:27:15 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:20.736 09:27:15 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:20.736 09:27:15 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:20.736 09:27:15 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:20.736 09:27:15 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:20.736 09:27:15 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:20.736 09:27:15 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.995 09:27:15 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:20.995 09:27:15 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:20.995 09:27:15 -- pm/common@17 -- $ local monitor 00:01:20.995 09:27:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.995 09:27:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.995 09:27:15 -- pm/common@25 -- $ sleep 1 00:01:20.995 09:27:15 -- pm/common@21 -- $ date +%s 00:01:20.995 09:27:15 -- pm/common@21 -- $ date +%s 00:01:20.995 09:27:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721035635 00:01:20.995 09:27:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721035635 00:01:20.995 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721035635_collect-vmstat.pm.log 00:01:20.995 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721035635_collect-cpu-load.pm.log 00:01:21.942 09:27:16 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:21.942 09:27:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:21.942 09:27:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:21.942 09:27:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:21.942 09:27:16 -- spdk/autobuild.sh@16 -- $ date -u 00:01:21.942 Mon Jul 15 09:27:16 AM UTC 2024 00:01:21.942 09:27:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:21.942 v24.09-pre-205-ge7cce062d 00:01:21.942 09:27:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:21.942 09:27:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:21.942 09:27:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:21.942 09:27:16 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:21.942 09:27:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:21.942 09:27:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.942 ************************************ 00:01:21.942 START TEST ubsan 00:01:21.942 ************************************ 00:01:21.942 using ubsan 00:01:21.942 09:27:16 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:21.942 00:01:21.942 real 0m0.000s 
00:01:21.942 user 0m0.000s 00:01:21.942 sys 0m0.000s 00:01:21.943 09:27:16 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:21.943 09:27:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:21.943 ************************************ 00:01:21.943 END TEST ubsan 00:01:21.943 ************************************ 00:01:21.943 09:27:16 -- common/autotest_common.sh@1142 -- $ return 0 00:01:21.943 09:27:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:21.943 09:27:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:21.943 09:27:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:21.943 09:27:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:21.943 09:27:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.943 09:27:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.943 09:27:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.943 09:27:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:21.943 09:27:16 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:21.943 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:21.943 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:22.509 Using 'verbs' RDMA provider 00:01:35.655 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:50.522 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:50.522 Creating mk/config.mk...done. 00:01:50.522 Creating mk/cc.flags.mk...done. 00:01:50.522 Type 'make' to build. 00:01:50.522 09:27:43 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:50.522 09:27:43 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:50.522 09:27:43 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:50.522 09:27:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.522 ************************************ 00:01:50.522 START TEST make 00:01:50.522 ************************************ 00:01:50.522 09:27:43 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:50.522 make[1]: Nothing to be done for 'all'. 
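The configure line logged just above records the exact feature set this job builds. A minimal sketch for repeating the same build step outside CI, assuming an SPDK checkout with submodules and pkgdep dependencies already in place (the --with-fio path below is where this VM image keeps the fio sources; adjust it for a local setup):

  cd spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j10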
00:02:00.492 The Meson build system 00:02:00.492 Version: 1.3.1 00:02:00.492 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:00.492 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:00.492 Build type: native build 00:02:00.492 Program cat found: YES (/usr/bin/cat) 00:02:00.492 Project name: DPDK 00:02:00.492 Project version: 24.03.0 00:02:00.492 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:00.492 C linker for the host machine: cc ld.bfd 2.39-16 00:02:00.492 Host machine cpu family: x86_64 00:02:00.492 Host machine cpu: x86_64 00:02:00.492 Message: ## Building in Developer Mode ## 00:02:00.492 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:00.492 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:00.492 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:00.492 Program python3 found: YES (/usr/bin/python3) 00:02:00.492 Program cat found: YES (/usr/bin/cat) 00:02:00.492 Compiler for C supports arguments -march=native: YES 00:02:00.492 Checking for size of "void *" : 8 00:02:00.492 Checking for size of "void *" : 8 (cached) 00:02:00.492 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:00.492 Library m found: YES 00:02:00.492 Library numa found: YES 00:02:00.492 Has header "numaif.h" : YES 00:02:00.492 Library fdt found: NO 00:02:00.492 Library execinfo found: NO 00:02:00.493 Has header "execinfo.h" : YES 00:02:00.493 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:00.493 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:00.493 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:00.493 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:00.493 Run-time dependency openssl found: YES 3.0.9 00:02:00.493 Run-time dependency libpcap found: YES 1.10.4 00:02:00.493 Has header "pcap.h" with dependency libpcap: YES 00:02:00.493 Compiler for C supports arguments -Wcast-qual: YES 00:02:00.493 Compiler for C supports arguments -Wdeprecated: YES 00:02:00.493 Compiler for C supports arguments -Wformat: YES 00:02:00.493 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:00.493 Compiler for C supports arguments -Wformat-security: NO 00:02:00.493 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:00.493 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:00.493 Compiler for C supports arguments -Wnested-externs: YES 00:02:00.493 Compiler for C supports arguments -Wold-style-definition: YES 00:02:00.493 Compiler for C supports arguments -Wpointer-arith: YES 00:02:00.493 Compiler for C supports arguments -Wsign-compare: YES 00:02:00.493 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:00.493 Compiler for C supports arguments -Wundef: YES 00:02:00.493 Compiler for C supports arguments -Wwrite-strings: YES 00:02:00.493 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:00.493 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:00.493 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:00.493 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:00.493 Program objdump found: YES (/usr/bin/objdump) 00:02:00.493 Compiler for C supports arguments -mavx512f: YES 00:02:00.493 Checking if "AVX512 checking" compiles: YES 00:02:00.493 Fetching value of define "__SSE4_2__" : 1 00:02:00.493 Fetching value of define 
"__AES__" : 1 00:02:00.493 Fetching value of define "__AVX__" : 1 00:02:00.493 Fetching value of define "__AVX2__" : 1 00:02:00.493 Fetching value of define "__AVX512BW__" : (undefined) 00:02:00.493 Fetching value of define "__AVX512CD__" : (undefined) 00:02:00.493 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:00.493 Fetching value of define "__AVX512F__" : (undefined) 00:02:00.493 Fetching value of define "__AVX512VL__" : (undefined) 00:02:00.493 Fetching value of define "__PCLMUL__" : 1 00:02:00.493 Fetching value of define "__RDRND__" : 1 00:02:00.493 Fetching value of define "__RDSEED__" : 1 00:02:00.493 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:00.493 Fetching value of define "__znver1__" : (undefined) 00:02:00.493 Fetching value of define "__znver2__" : (undefined) 00:02:00.493 Fetching value of define "__znver3__" : (undefined) 00:02:00.493 Fetching value of define "__znver4__" : (undefined) 00:02:00.493 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:00.493 Message: lib/log: Defining dependency "log" 00:02:00.493 Message: lib/kvargs: Defining dependency "kvargs" 00:02:00.493 Message: lib/telemetry: Defining dependency "telemetry" 00:02:00.493 Checking for function "getentropy" : NO 00:02:00.493 Message: lib/eal: Defining dependency "eal" 00:02:00.493 Message: lib/ring: Defining dependency "ring" 00:02:00.493 Message: lib/rcu: Defining dependency "rcu" 00:02:00.493 Message: lib/mempool: Defining dependency "mempool" 00:02:00.493 Message: lib/mbuf: Defining dependency "mbuf" 00:02:00.493 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:00.493 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:00.493 Compiler for C supports arguments -mpclmul: YES 00:02:00.493 Compiler for C supports arguments -maes: YES 00:02:00.493 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.493 Compiler for C supports arguments -mavx512bw: YES 00:02:00.493 Compiler for C supports arguments -mavx512dq: YES 00:02:00.493 Compiler for C supports arguments -mavx512vl: YES 00:02:00.493 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:00.493 Compiler for C supports arguments -mavx2: YES 00:02:00.493 Compiler for C supports arguments -mavx: YES 00:02:00.493 Message: lib/net: Defining dependency "net" 00:02:00.493 Message: lib/meter: Defining dependency "meter" 00:02:00.493 Message: lib/ethdev: Defining dependency "ethdev" 00:02:00.493 Message: lib/pci: Defining dependency "pci" 00:02:00.493 Message: lib/cmdline: Defining dependency "cmdline" 00:02:00.493 Message: lib/hash: Defining dependency "hash" 00:02:00.493 Message: lib/timer: Defining dependency "timer" 00:02:00.493 Message: lib/compressdev: Defining dependency "compressdev" 00:02:00.493 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:00.493 Message: lib/dmadev: Defining dependency "dmadev" 00:02:00.493 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:00.493 Message: lib/power: Defining dependency "power" 00:02:00.493 Message: lib/reorder: Defining dependency "reorder" 00:02:00.493 Message: lib/security: Defining dependency "security" 00:02:00.493 Has header "linux/userfaultfd.h" : YES 00:02:00.493 Has header "linux/vduse.h" : YES 00:02:00.493 Message: lib/vhost: Defining dependency "vhost" 00:02:00.493 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:00.493 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:00.493 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:00.493 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:00.493 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:00.493 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:00.493 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:00.493 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:00.493 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:00.493 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:00.493 Program doxygen found: YES (/usr/bin/doxygen) 00:02:00.493 Configuring doxy-api-html.conf using configuration 00:02:00.493 Configuring doxy-api-man.conf using configuration 00:02:00.493 Program mandb found: YES (/usr/bin/mandb) 00:02:00.493 Program sphinx-build found: NO 00:02:00.493 Configuring rte_build_config.h using configuration 00:02:00.493 Message: 00:02:00.493 ================= 00:02:00.493 Applications Enabled 00:02:00.493 ================= 00:02:00.493 00:02:00.493 apps: 00:02:00.493 00:02:00.493 00:02:00.493 Message: 00:02:00.493 ================= 00:02:00.493 Libraries Enabled 00:02:00.493 ================= 00:02:00.493 00:02:00.493 libs: 00:02:00.493 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:00.493 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:00.493 cryptodev, dmadev, power, reorder, security, vhost, 00:02:00.493 00:02:00.493 Message: 00:02:00.493 =============== 00:02:00.493 Drivers Enabled 00:02:00.493 =============== 00:02:00.493 00:02:00.493 common: 00:02:00.493 00:02:00.493 bus: 00:02:00.493 pci, vdev, 00:02:00.493 mempool: 00:02:00.493 ring, 00:02:00.493 dma: 00:02:00.493 00:02:00.493 net: 00:02:00.493 00:02:00.493 crypto: 00:02:00.493 00:02:00.493 compress: 00:02:00.493 00:02:00.493 vdpa: 00:02:00.493 00:02:00.493 00:02:00.493 Message: 00:02:00.493 ================= 00:02:00.493 Content Skipped 00:02:00.493 ================= 00:02:00.493 00:02:00.493 apps: 00:02:00.493 dumpcap: explicitly disabled via build config 00:02:00.493 graph: explicitly disabled via build config 00:02:00.493 pdump: explicitly disabled via build config 00:02:00.493 proc-info: explicitly disabled via build config 00:02:00.493 test-acl: explicitly disabled via build config 00:02:00.493 test-bbdev: explicitly disabled via build config 00:02:00.493 test-cmdline: explicitly disabled via build config 00:02:00.493 test-compress-perf: explicitly disabled via build config 00:02:00.493 test-crypto-perf: explicitly disabled via build config 00:02:00.493 test-dma-perf: explicitly disabled via build config 00:02:00.493 test-eventdev: explicitly disabled via build config 00:02:00.493 test-fib: explicitly disabled via build config 00:02:00.493 test-flow-perf: explicitly disabled via build config 00:02:00.493 test-gpudev: explicitly disabled via build config 00:02:00.493 test-mldev: explicitly disabled via build config 00:02:00.493 test-pipeline: explicitly disabled via build config 00:02:00.493 test-pmd: explicitly disabled via build config 00:02:00.493 test-regex: explicitly disabled via build config 00:02:00.493 test-sad: explicitly disabled via build config 00:02:00.493 test-security-perf: explicitly disabled via build config 00:02:00.493 00:02:00.493 libs: 00:02:00.493 argparse: explicitly disabled via build config 00:02:00.493 metrics: explicitly disabled via build config 00:02:00.493 acl: explicitly disabled via build config 00:02:00.493 bbdev: explicitly disabled via build config 00:02:00.493 
bitratestats: explicitly disabled via build config 00:02:00.493 bpf: explicitly disabled via build config 00:02:00.493 cfgfile: explicitly disabled via build config 00:02:00.493 distributor: explicitly disabled via build config 00:02:00.493 efd: explicitly disabled via build config 00:02:00.493 eventdev: explicitly disabled via build config 00:02:00.493 dispatcher: explicitly disabled via build config 00:02:00.493 gpudev: explicitly disabled via build config 00:02:00.493 gro: explicitly disabled via build config 00:02:00.493 gso: explicitly disabled via build config 00:02:00.493 ip_frag: explicitly disabled via build config 00:02:00.493 jobstats: explicitly disabled via build config 00:02:00.493 latencystats: explicitly disabled via build config 00:02:00.493 lpm: explicitly disabled via build config 00:02:00.493 member: explicitly disabled via build config 00:02:00.493 pcapng: explicitly disabled via build config 00:02:00.493 rawdev: explicitly disabled via build config 00:02:00.493 regexdev: explicitly disabled via build config 00:02:00.493 mldev: explicitly disabled via build config 00:02:00.493 rib: explicitly disabled via build config 00:02:00.493 sched: explicitly disabled via build config 00:02:00.493 stack: explicitly disabled via build config 00:02:00.493 ipsec: explicitly disabled via build config 00:02:00.493 pdcp: explicitly disabled via build config 00:02:00.493 fib: explicitly disabled via build config 00:02:00.493 port: explicitly disabled via build config 00:02:00.493 pdump: explicitly disabled via build config 00:02:00.493 table: explicitly disabled via build config 00:02:00.493 pipeline: explicitly disabled via build config 00:02:00.494 graph: explicitly disabled via build config 00:02:00.494 node: explicitly disabled via build config 00:02:00.494 00:02:00.494 drivers: 00:02:00.494 common/cpt: not in enabled drivers build config 00:02:00.494 common/dpaax: not in enabled drivers build config 00:02:00.494 common/iavf: not in enabled drivers build config 00:02:00.494 common/idpf: not in enabled drivers build config 00:02:00.494 common/ionic: not in enabled drivers build config 00:02:00.494 common/mvep: not in enabled drivers build config 00:02:00.494 common/octeontx: not in enabled drivers build config 00:02:00.494 bus/auxiliary: not in enabled drivers build config 00:02:00.494 bus/cdx: not in enabled drivers build config 00:02:00.494 bus/dpaa: not in enabled drivers build config 00:02:00.494 bus/fslmc: not in enabled drivers build config 00:02:00.494 bus/ifpga: not in enabled drivers build config 00:02:00.494 bus/platform: not in enabled drivers build config 00:02:00.494 bus/uacce: not in enabled drivers build config 00:02:00.494 bus/vmbus: not in enabled drivers build config 00:02:00.494 common/cnxk: not in enabled drivers build config 00:02:00.494 common/mlx5: not in enabled drivers build config 00:02:00.494 common/nfp: not in enabled drivers build config 00:02:00.494 common/nitrox: not in enabled drivers build config 00:02:00.494 common/qat: not in enabled drivers build config 00:02:00.494 common/sfc_efx: not in enabled drivers build config 00:02:00.494 mempool/bucket: not in enabled drivers build config 00:02:00.494 mempool/cnxk: not in enabled drivers build config 00:02:00.494 mempool/dpaa: not in enabled drivers build config 00:02:00.494 mempool/dpaa2: not in enabled drivers build config 00:02:00.494 mempool/octeontx: not in enabled drivers build config 00:02:00.494 mempool/stack: not in enabled drivers build config 00:02:00.494 dma/cnxk: not in enabled drivers build 
config 00:02:00.494 dma/dpaa: not in enabled drivers build config 00:02:00.494 dma/dpaa2: not in enabled drivers build config 00:02:00.494 dma/hisilicon: not in enabled drivers build config 00:02:00.494 dma/idxd: not in enabled drivers build config 00:02:00.494 dma/ioat: not in enabled drivers build config 00:02:00.494 dma/skeleton: not in enabled drivers build config 00:02:00.494 net/af_packet: not in enabled drivers build config 00:02:00.494 net/af_xdp: not in enabled drivers build config 00:02:00.494 net/ark: not in enabled drivers build config 00:02:00.494 net/atlantic: not in enabled drivers build config 00:02:00.494 net/avp: not in enabled drivers build config 00:02:00.494 net/axgbe: not in enabled drivers build config 00:02:00.494 net/bnx2x: not in enabled drivers build config 00:02:00.494 net/bnxt: not in enabled drivers build config 00:02:00.494 net/bonding: not in enabled drivers build config 00:02:00.494 net/cnxk: not in enabled drivers build config 00:02:00.494 net/cpfl: not in enabled drivers build config 00:02:00.494 net/cxgbe: not in enabled drivers build config 00:02:00.494 net/dpaa: not in enabled drivers build config 00:02:00.494 net/dpaa2: not in enabled drivers build config 00:02:00.494 net/e1000: not in enabled drivers build config 00:02:00.494 net/ena: not in enabled drivers build config 00:02:00.494 net/enetc: not in enabled drivers build config 00:02:00.494 net/enetfec: not in enabled drivers build config 00:02:00.494 net/enic: not in enabled drivers build config 00:02:00.494 net/failsafe: not in enabled drivers build config 00:02:00.494 net/fm10k: not in enabled drivers build config 00:02:00.494 net/gve: not in enabled drivers build config 00:02:00.494 net/hinic: not in enabled drivers build config 00:02:00.494 net/hns3: not in enabled drivers build config 00:02:00.494 net/i40e: not in enabled drivers build config 00:02:00.494 net/iavf: not in enabled drivers build config 00:02:00.494 net/ice: not in enabled drivers build config 00:02:00.494 net/idpf: not in enabled drivers build config 00:02:00.494 net/igc: not in enabled drivers build config 00:02:00.494 net/ionic: not in enabled drivers build config 00:02:00.494 net/ipn3ke: not in enabled drivers build config 00:02:00.494 net/ixgbe: not in enabled drivers build config 00:02:00.494 net/mana: not in enabled drivers build config 00:02:00.494 net/memif: not in enabled drivers build config 00:02:00.494 net/mlx4: not in enabled drivers build config 00:02:00.494 net/mlx5: not in enabled drivers build config 00:02:00.494 net/mvneta: not in enabled drivers build config 00:02:00.494 net/mvpp2: not in enabled drivers build config 00:02:00.494 net/netvsc: not in enabled drivers build config 00:02:00.494 net/nfb: not in enabled drivers build config 00:02:00.494 net/nfp: not in enabled drivers build config 00:02:00.494 net/ngbe: not in enabled drivers build config 00:02:00.494 net/null: not in enabled drivers build config 00:02:00.494 net/octeontx: not in enabled drivers build config 00:02:00.494 net/octeon_ep: not in enabled drivers build config 00:02:00.494 net/pcap: not in enabled drivers build config 00:02:00.494 net/pfe: not in enabled drivers build config 00:02:00.494 net/qede: not in enabled drivers build config 00:02:00.494 net/ring: not in enabled drivers build config 00:02:00.494 net/sfc: not in enabled drivers build config 00:02:00.494 net/softnic: not in enabled drivers build config 00:02:00.494 net/tap: not in enabled drivers build config 00:02:00.494 net/thunderx: not in enabled drivers build config 00:02:00.494 
net/txgbe: not in enabled drivers build config 00:02:00.494 net/vdev_netvsc: not in enabled drivers build config 00:02:00.494 net/vhost: not in enabled drivers build config 00:02:00.494 net/virtio: not in enabled drivers build config 00:02:00.494 net/vmxnet3: not in enabled drivers build config 00:02:00.494 raw/*: missing internal dependency, "rawdev" 00:02:00.494 crypto/armv8: not in enabled drivers build config 00:02:00.494 crypto/bcmfs: not in enabled drivers build config 00:02:00.494 crypto/caam_jr: not in enabled drivers build config 00:02:00.494 crypto/ccp: not in enabled drivers build config 00:02:00.494 crypto/cnxk: not in enabled drivers build config 00:02:00.494 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.494 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.494 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.494 crypto/mlx5: not in enabled drivers build config 00:02:00.494 crypto/mvsam: not in enabled drivers build config 00:02:00.494 crypto/nitrox: not in enabled drivers build config 00:02:00.494 crypto/null: not in enabled drivers build config 00:02:00.494 crypto/octeontx: not in enabled drivers build config 00:02:00.494 crypto/openssl: not in enabled drivers build config 00:02:00.494 crypto/scheduler: not in enabled drivers build config 00:02:00.494 crypto/uadk: not in enabled drivers build config 00:02:00.494 crypto/virtio: not in enabled drivers build config 00:02:00.494 compress/isal: not in enabled drivers build config 00:02:00.494 compress/mlx5: not in enabled drivers build config 00:02:00.494 compress/nitrox: not in enabled drivers build config 00:02:00.494 compress/octeontx: not in enabled drivers build config 00:02:00.494 compress/zlib: not in enabled drivers build config 00:02:00.494 regex/*: missing internal dependency, "regexdev" 00:02:00.495 ml/*: missing internal dependency, "mldev" 00:02:00.495 vdpa/ifc: not in enabled drivers build config 00:02:00.495 vdpa/mlx5: not in enabled drivers build config 00:02:00.495 vdpa/nfp: not in enabled drivers build config 00:02:00.495 vdpa/sfc: not in enabled drivers build config 00:02:00.495 event/*: missing internal dependency, "eventdev" 00:02:00.495 baseband/*: missing internal dependency, "bbdev" 00:02:00.495 gpu/*: missing internal dependency, "gpudev" 00:02:00.495 00:02:00.495 00:02:00.495 Build targets in project: 85 00:02:00.495 00:02:00.495 DPDK 24.03.0 00:02:00.495 00:02:00.495 User defined options 00:02:00.495 buildtype : debug 00:02:00.495 default_library : shared 00:02:00.495 libdir : lib 00:02:00.495 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:00.495 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:00.495 c_link_args : 00:02:00.495 cpu_instruction_set: native 00:02:00.495 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:00.495 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:00.495 enable_docs : false 00:02:00.495 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:00.495 enable_kmods : false 00:02:00.495 max_lcores : 128 00:02:00.495 tests : false 00:02:00.495 00:02:00.495 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.495 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:00.495 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:00.495 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.495 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:00.495 [4/268] Linking static target lib/librte_kvargs.a 00:02:00.495 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.495 [6/268] Linking static target lib/librte_log.a 00:02:00.495 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.495 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.495 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.495 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.495 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.495 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.495 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.495 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.753 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.753 [16/268] Linking static target lib/librte_telemetry.a 00:02:00.753 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.753 [18/268] Linking target lib/librte_log.so.24.1 00:02:00.753 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.753 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:01.012 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:01.012 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:01.270 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.270 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.270 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:01.270 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.528 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.528 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.528 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.528 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.528 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.528 [32/268] Linking target lib/librte_telemetry.so.24.1 00:02:01.528 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.786 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:01.786 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.044 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:02.044 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.302 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:02.302 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.302 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.302 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.560 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.560 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.560 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.560 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.560 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.560 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.894 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.894 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:03.178 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.178 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:03.178 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.436 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.436 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.436 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.436 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.694 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.694 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.694 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.951 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:03.951 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:04.209 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:04.209 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:04.209 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:04.467 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:04.467 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:04.467 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:04.467 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:04.467 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:04.725 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:04.725 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:04.725 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:04.982 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:04.982 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:04.982 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:04.982 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:05.240 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.240 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.498 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:05.498 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:05.498 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:05.498 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:05.756 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:05.756 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:05.756 [85/268] Linking static target lib/librte_eal.a 00:02:06.013 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.013 [87/268] Linking static target lib/librte_ring.a 00:02:06.013 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.271 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:06.271 [90/268] Linking static target lib/librte_rcu.a 00:02:06.271 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.271 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.529 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.529 [94/268] Linking static target lib/librte_mempool.a 00:02:06.529 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.529 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.529 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.787 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:06.788 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.046 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:07.046 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:07.046 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:07.046 [103/268] Linking static target lib/librte_mbuf.a 00:02:07.304 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:07.562 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:07.562 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:07.562 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.562 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:07.562 [109/268] Linking static target lib/librte_net.a 00:02:07.820 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.820 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:07.820 [112/268] Linking static target lib/librte_meter.a 00:02:08.079 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:08.337 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.337 [115/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.337 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.337 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:08.594 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:08.594 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:08.851 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:08.851 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:09.142 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.142 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:09.401 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.401 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:09.401 [126/268] Linking static target lib/librte_pci.a 00:02:09.401 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:09.401 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:09.659 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:09.659 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:09.659 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:09.659 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.659 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.659 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.659 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:09.916 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:09.916 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.916 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.916 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.916 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.916 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.916 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:09.916 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.916 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:10.173 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:10.173 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:10.429 [147/268] Linking static target lib/librte_ethdev.a 00:02:10.429 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:10.429 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:10.429 [150/268] Linking static target lib/librte_cmdline.a 00:02:10.429 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:10.685 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:10.685 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:10.685 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:10.685 [155/268] Linking static target lib/librte_timer.a 00:02:10.943 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:10.943 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:10.944 [158/268] Linking static target lib/librte_hash.a 00:02:10.944 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:11.201 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:11.459 
[161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:11.459 [162/268] Linking static target lib/librte_compressdev.a 00:02:11.459 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:11.459 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.717 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.717 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:11.717 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:11.717 [168/268] Linking static target lib/librte_dmadev.a 00:02:11.717 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:11.974 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.974 [171/268] Linking static target lib/librte_cryptodev.a 00:02:11.974 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:12.233 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:12.233 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.233 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:12.233 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.233 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.552 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.552 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.826 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:12.826 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.826 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:12.826 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:12.826 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:13.084 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:13.084 [186/268] Linking static target lib/librte_power.a 00:02:13.084 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.084 [188/268] Linking static target lib/librte_reorder.a 00:02:13.342 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.342 [190/268] Linking static target lib/librte_security.a 00:02:13.342 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.342 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.342 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.910 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.910 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:14.169 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.169 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.169 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:14.428 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:14.428 [200/268] Compiling C 
object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:14.428 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.428 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:14.995 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:14.995 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:14.995 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:14.995 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:14.995 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:14.995 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:14.995 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.254 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:15.254 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:15.254 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:15.254 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:15.254 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.254 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.254 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:15.513 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.513 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.513 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.513 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:15.513 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.513 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:15.772 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.772 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:15.772 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.772 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.772 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:15.772 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.338 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:16.338 [230/268] Linking static target lib/librte_vhost.a 00:02:16.902 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.159 [232/268] Linking target lib/librte_eal.so.24.1 00:02:17.159 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:17.159 [234/268] Linking target lib/librte_timer.so.24.1 00:02:17.159 [235/268] Linking target lib/librte_ring.so.24.1 00:02:17.159 [236/268] Linking target lib/librte_pci.so.24.1 00:02:17.159 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:17.416 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:17.416 [239/268] Linking target lib/librte_meter.so.24.1 
00:02:17.416 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:17.416 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:17.416 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:17.416 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:17.416 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:17.416 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:17.416 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:17.416 [247/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:17.673 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:17.673 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:17.673 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:17.673 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:17.673 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:17.673 [253/268] Linking target lib/librte_net.so.24.1 00:02:17.932 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:17.932 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:17.932 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:17.932 [257/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.932 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:17.932 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:17.932 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:17.932 [261/268] Linking target lib/librte_hash.so.24.1 00:02:17.932 [262/268] Linking target lib/librte_security.so.24.1 00:02:18.189 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.189 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:18.189 [265/268] Linking target lib/librte_ethdev.so.24.1 00:02:18.447 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:18.447 [267/268] Linking target lib/librte_power.so.24.1 00:02:18.447 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:18.447 INFO: autodetecting backend as ninja 00:02:18.447 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:19.819 CC lib/ut/ut.o 00:02:19.819 CC lib/ut_mock/mock.o 00:02:19.819 CC lib/log/log_flags.o 00:02:19.819 CC lib/log/log.o 00:02:19.819 CC lib/log/log_deprecated.o 00:02:19.819 LIB libspdk_ut_mock.a 00:02:19.819 LIB libspdk_ut.a 00:02:19.819 LIB libspdk_log.a 00:02:19.819 SO libspdk_ut_mock.so.6.0 00:02:19.819 SO libspdk_ut.so.2.0 00:02:19.819 SO libspdk_log.so.7.0 00:02:19.819 SYMLINK libspdk_ut_mock.so 00:02:19.819 SYMLINK libspdk_ut.so 00:02:19.819 SYMLINK libspdk_log.so 00:02:20.078 CXX lib/trace_parser/trace.o 00:02:20.078 CC lib/util/base64.o 00:02:20.078 CC lib/ioat/ioat.o 00:02:20.078 CC lib/util/bit_array.o 00:02:20.078 CC lib/util/cpuset.o 00:02:20.078 CC lib/util/crc16.o 00:02:20.078 CC lib/util/crc32.o 00:02:20.078 CC lib/dma/dma.o 00:02:20.078 CC lib/util/crc32c.o 00:02:20.335 CC lib/vfio_user/host/vfio_user_pci.o 00:02:20.335 CC lib/util/crc32_ieee.o 00:02:20.335 CC lib/util/crc64.o 00:02:20.335 CC lib/util/dif.o 
00:02:20.335 CC lib/util/fd.o 00:02:20.335 LIB libspdk_dma.a 00:02:20.335 CC lib/vfio_user/host/vfio_user.o 00:02:20.335 CC lib/util/file.o 00:02:20.335 SO libspdk_dma.so.4.0 00:02:20.335 LIB libspdk_ioat.a 00:02:20.335 CC lib/util/hexlify.o 00:02:20.335 SO libspdk_ioat.so.7.0 00:02:20.593 SYMLINK libspdk_dma.so 00:02:20.593 CC lib/util/iov.o 00:02:20.593 CC lib/util/math.o 00:02:20.593 CC lib/util/pipe.o 00:02:20.593 SYMLINK libspdk_ioat.so 00:02:20.593 CC lib/util/strerror_tls.o 00:02:20.593 CC lib/util/string.o 00:02:20.593 CC lib/util/uuid.o 00:02:20.593 LIB libspdk_vfio_user.a 00:02:20.593 CC lib/util/fd_group.o 00:02:20.593 SO libspdk_vfio_user.so.5.0 00:02:20.593 CC lib/util/xor.o 00:02:20.593 CC lib/util/zipf.o 00:02:20.593 SYMLINK libspdk_vfio_user.so 00:02:20.851 LIB libspdk_util.a 00:02:20.851 SO libspdk_util.so.9.1 00:02:21.108 LIB libspdk_trace_parser.a 00:02:21.108 SYMLINK libspdk_util.so 00:02:21.108 SO libspdk_trace_parser.so.5.0 00:02:21.108 SYMLINK libspdk_trace_parser.so 00:02:21.366 CC lib/rdma_utils/rdma_utils.o 00:02:21.366 CC lib/idxd/idxd_user.o 00:02:21.366 CC lib/idxd/idxd.o 00:02:21.366 CC lib/rdma_provider/common.o 00:02:21.366 CC lib/idxd/idxd_kernel.o 00:02:21.366 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:21.366 CC lib/vmd/vmd.o 00:02:21.366 CC lib/conf/conf.o 00:02:21.366 CC lib/json/json_parse.o 00:02:21.366 CC lib/env_dpdk/env.o 00:02:21.366 CC lib/env_dpdk/memory.o 00:02:21.366 CC lib/env_dpdk/pci.o 00:02:21.366 LIB libspdk_rdma_provider.a 00:02:21.623 SO libspdk_rdma_provider.so.6.0 00:02:21.623 CC lib/json/json_util.o 00:02:21.623 LIB libspdk_conf.a 00:02:21.623 CC lib/json/json_write.o 00:02:21.623 LIB libspdk_rdma_utils.a 00:02:21.623 SO libspdk_conf.so.6.0 00:02:21.623 SYMLINK libspdk_rdma_provider.so 00:02:21.623 SO libspdk_rdma_utils.so.1.0 00:02:21.623 CC lib/vmd/led.o 00:02:21.623 SYMLINK libspdk_conf.so 00:02:21.623 CC lib/env_dpdk/init.o 00:02:21.623 SYMLINK libspdk_rdma_utils.so 00:02:21.623 CC lib/env_dpdk/threads.o 00:02:21.623 CC lib/env_dpdk/pci_ioat.o 00:02:21.881 CC lib/env_dpdk/pci_virtio.o 00:02:21.881 CC lib/env_dpdk/pci_vmd.o 00:02:21.881 CC lib/env_dpdk/pci_idxd.o 00:02:21.881 LIB libspdk_json.a 00:02:21.881 LIB libspdk_idxd.a 00:02:21.881 SO libspdk_json.so.6.0 00:02:21.881 CC lib/env_dpdk/pci_event.o 00:02:21.881 SO libspdk_idxd.so.12.0 00:02:21.881 SYMLINK libspdk_json.so 00:02:21.881 CC lib/env_dpdk/sigbus_handler.o 00:02:21.881 CC lib/env_dpdk/pci_dpdk.o 00:02:21.881 SYMLINK libspdk_idxd.so 00:02:21.881 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:21.881 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:21.881 LIB libspdk_vmd.a 00:02:22.144 SO libspdk_vmd.so.6.0 00:02:22.144 SYMLINK libspdk_vmd.so 00:02:22.144 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:22.145 CC lib/jsonrpc/jsonrpc_server.o 00:02:22.145 CC lib/jsonrpc/jsonrpc_client.o 00:02:22.145 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:22.405 LIB libspdk_jsonrpc.a 00:02:22.405 SO libspdk_jsonrpc.so.6.0 00:02:22.663 SYMLINK libspdk_jsonrpc.so 00:02:22.663 LIB libspdk_env_dpdk.a 00:02:22.663 SO libspdk_env_dpdk.so.14.1 00:02:22.920 CC lib/rpc/rpc.o 00:02:22.920 SYMLINK libspdk_env_dpdk.so 00:02:23.177 LIB libspdk_rpc.a 00:02:23.177 SO libspdk_rpc.so.6.0 00:02:23.177 SYMLINK libspdk_rpc.so 00:02:23.436 CC lib/keyring/keyring.o 00:02:23.436 CC lib/keyring/keyring_rpc.o 00:02:23.436 CC lib/notify/notify.o 00:02:23.436 CC lib/notify/notify_rpc.o 00:02:23.436 CC lib/trace/trace.o 00:02:23.436 CC lib/trace/trace_flags.o 00:02:23.436 CC lib/trace/trace_rpc.o 00:02:23.698 LIB libspdk_notify.a 
00:02:23.698 LIB libspdk_keyring.a 00:02:23.698 SO libspdk_notify.so.6.0 00:02:23.698 SO libspdk_keyring.so.1.0 00:02:23.698 SYMLINK libspdk_notify.so 00:02:23.698 SYMLINK libspdk_keyring.so 00:02:23.698 LIB libspdk_trace.a 00:02:23.698 SO libspdk_trace.so.10.0 00:02:23.956 SYMLINK libspdk_trace.so 00:02:24.213 CC lib/sock/sock.o 00:02:24.213 CC lib/sock/sock_rpc.o 00:02:24.213 CC lib/thread/iobuf.o 00:02:24.213 CC lib/thread/thread.o 00:02:24.471 LIB libspdk_sock.a 00:02:24.471 SO libspdk_sock.so.10.0 00:02:24.729 SYMLINK libspdk_sock.so 00:02:24.987 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:24.987 CC lib/nvme/nvme_ctrlr.o 00:02:24.987 CC lib/nvme/nvme_fabric.o 00:02:24.987 CC lib/nvme/nvme_ns.o 00:02:24.987 CC lib/nvme/nvme_ns_cmd.o 00:02:24.987 CC lib/nvme/nvme_pcie_common.o 00:02:24.987 CC lib/nvme/nvme_pcie.o 00:02:24.987 CC lib/nvme/nvme.o 00:02:24.987 CC lib/nvme/nvme_qpair.o 00:02:25.552 LIB libspdk_thread.a 00:02:25.552 CC lib/nvme/nvme_quirks.o 00:02:25.810 SO libspdk_thread.so.10.1 00:02:25.810 CC lib/nvme/nvme_transport.o 00:02:25.810 SYMLINK libspdk_thread.so 00:02:25.810 CC lib/nvme/nvme_discovery.o 00:02:25.810 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:25.810 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:25.810 CC lib/accel/accel.o 00:02:25.810 CC lib/nvme/nvme_tcp.o 00:02:26.067 CC lib/nvme/nvme_opal.o 00:02:26.067 CC lib/nvme/nvme_io_msg.o 00:02:26.325 CC lib/nvme/nvme_poll_group.o 00:02:26.582 CC lib/nvme/nvme_zns.o 00:02:26.582 CC lib/nvme/nvme_stubs.o 00:02:26.582 CC lib/nvme/nvme_auth.o 00:02:26.582 CC lib/nvme/nvme_cuse.o 00:02:26.582 CC lib/blob/blobstore.o 00:02:26.840 CC lib/blob/request.o 00:02:26.840 CC lib/accel/accel_rpc.o 00:02:26.840 CC lib/nvme/nvme_rdma.o 00:02:27.097 CC lib/accel/accel_sw.o 00:02:27.097 CC lib/blob/zeroes.o 00:02:27.097 CC lib/blob/blob_bs_dev.o 00:02:27.355 CC lib/init/json_config.o 00:02:27.355 CC lib/init/subsystem.o 00:02:27.355 CC lib/init/subsystem_rpc.o 00:02:27.355 LIB libspdk_accel.a 00:02:27.355 SO libspdk_accel.so.15.1 00:02:27.355 CC lib/init/rpc.o 00:02:27.355 CC lib/virtio/virtio.o 00:02:27.355 CC lib/virtio/virtio_vhost_user.o 00:02:27.355 CC lib/virtio/virtio_vfio_user.o 00:02:27.355 CC lib/virtio/virtio_pci.o 00:02:27.355 SYMLINK libspdk_accel.so 00:02:27.654 LIB libspdk_init.a 00:02:27.654 SO libspdk_init.so.5.0 00:02:27.654 CC lib/bdev/bdev_rpc.o 00:02:27.654 CC lib/bdev/bdev.o 00:02:27.654 CC lib/bdev/bdev_zone.o 00:02:27.654 CC lib/bdev/part.o 00:02:27.654 SYMLINK libspdk_init.so 00:02:27.654 CC lib/bdev/scsi_nvme.o 00:02:27.914 LIB libspdk_virtio.a 00:02:27.914 SO libspdk_virtio.so.7.0 00:02:27.914 SYMLINK libspdk_virtio.so 00:02:27.914 CC lib/event/app.o 00:02:27.914 CC lib/event/reactor.o 00:02:27.914 CC lib/event/app_rpc.o 00:02:27.914 CC lib/event/log_rpc.o 00:02:27.914 CC lib/event/scheduler_static.o 00:02:28.172 LIB libspdk_nvme.a 00:02:28.430 LIB libspdk_event.a 00:02:28.430 SO libspdk_event.so.14.0 00:02:28.430 SO libspdk_nvme.so.13.1 00:02:28.430 SYMLINK libspdk_event.so 00:02:28.687 SYMLINK libspdk_nvme.so 00:02:29.617 LIB libspdk_blob.a 00:02:29.617 SO libspdk_blob.so.11.0 00:02:29.617 SYMLINK libspdk_blob.so 00:02:29.874 CC lib/blobfs/blobfs.o 00:02:29.874 CC lib/blobfs/tree.o 00:02:29.874 CC lib/lvol/lvol.o 00:02:30.435 LIB libspdk_bdev.a 00:02:30.435 SO libspdk_bdev.so.15.1 00:02:30.435 SYMLINK libspdk_bdev.so 00:02:30.691 CC lib/ublk/ublk.o 00:02:30.691 CC lib/ublk/ublk_rpc.o 00:02:30.691 CC lib/nbd/nbd.o 00:02:30.691 CC lib/nbd/nbd_rpc.o 00:02:30.691 CC lib/nvmf/ctrlr.o 00:02:30.691 CC lib/ftl/ftl_core.o 00:02:30.691 
CC lib/ftl/ftl_init.o 00:02:30.691 CC lib/scsi/dev.o 00:02:30.947 LIB libspdk_blobfs.a 00:02:30.947 SO libspdk_blobfs.so.10.0 00:02:30.947 LIB libspdk_lvol.a 00:02:30.947 CC lib/scsi/lun.o 00:02:30.947 SYMLINK libspdk_blobfs.so 00:02:30.947 CC lib/scsi/port.o 00:02:30.947 SO libspdk_lvol.so.10.0 00:02:30.947 CC lib/ftl/ftl_layout.o 00:02:30.947 SYMLINK libspdk_lvol.so 00:02:30.947 CC lib/ftl/ftl_debug.o 00:02:30.947 CC lib/scsi/scsi.o 00:02:30.947 CC lib/scsi/scsi_bdev.o 00:02:31.203 CC lib/scsi/scsi_pr.o 00:02:31.203 CC lib/scsi/scsi_rpc.o 00:02:31.203 CC lib/scsi/task.o 00:02:31.203 CC lib/nvmf/ctrlr_discovery.o 00:02:31.203 CC lib/ftl/ftl_io.o 00:02:31.203 LIB libspdk_nbd.a 00:02:31.203 CC lib/nvmf/ctrlr_bdev.o 00:02:31.506 SO libspdk_nbd.so.7.0 00:02:31.506 CC lib/nvmf/subsystem.o 00:02:31.506 CC lib/nvmf/nvmf.o 00:02:31.506 SYMLINK libspdk_nbd.so 00:02:31.506 CC lib/nvmf/nvmf_rpc.o 00:02:31.506 CC lib/nvmf/transport.o 00:02:31.506 LIB libspdk_ublk.a 00:02:31.506 SO libspdk_ublk.so.3.0 00:02:31.506 CC lib/ftl/ftl_sb.o 00:02:31.506 LIB libspdk_scsi.a 00:02:31.506 SYMLINK libspdk_ublk.so 00:02:31.506 CC lib/nvmf/tcp.o 00:02:31.762 SO libspdk_scsi.so.9.0 00:02:31.762 CC lib/ftl/ftl_l2p.o 00:02:31.762 CC lib/ftl/ftl_l2p_flat.o 00:02:31.762 SYMLINK libspdk_scsi.so 00:02:31.762 CC lib/ftl/ftl_nv_cache.o 00:02:32.019 CC lib/nvmf/stubs.o 00:02:32.019 CC lib/nvmf/mdns_server.o 00:02:32.019 CC lib/nvmf/rdma.o 00:02:32.275 CC lib/iscsi/conn.o 00:02:32.275 CC lib/iscsi/init_grp.o 00:02:32.275 CC lib/iscsi/iscsi.o 00:02:32.540 CC lib/nvmf/auth.o 00:02:32.540 CC lib/ftl/ftl_band.o 00:02:32.540 CC lib/iscsi/md5.o 00:02:32.540 CC lib/vhost/vhost.o 00:02:32.798 CC lib/vhost/vhost_rpc.o 00:02:32.798 CC lib/vhost/vhost_scsi.o 00:02:32.798 CC lib/iscsi/param.o 00:02:32.798 CC lib/ftl/ftl_band_ops.o 00:02:32.798 CC lib/ftl/ftl_writer.o 00:02:33.056 CC lib/ftl/ftl_rq.o 00:02:33.056 CC lib/vhost/vhost_blk.o 00:02:33.056 CC lib/iscsi/portal_grp.o 00:02:33.314 CC lib/ftl/ftl_reloc.o 00:02:33.314 CC lib/ftl/ftl_l2p_cache.o 00:02:33.314 CC lib/ftl/ftl_p2l.o 00:02:33.314 CC lib/ftl/mngt/ftl_mngt.o 00:02:33.314 CC lib/iscsi/tgt_node.o 00:02:33.572 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:33.572 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:33.572 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:33.572 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:33.572 CC lib/iscsi/iscsi_subsystem.o 00:02:33.830 CC lib/iscsi/iscsi_rpc.o 00:02:33.830 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:33.830 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:33.830 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:33.830 CC lib/vhost/rte_vhost_user.o 00:02:33.830 CC lib/iscsi/task.o 00:02:34.088 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:34.088 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:34.088 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:34.088 LIB libspdk_nvmf.a 00:02:34.088 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:34.088 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:34.088 CC lib/ftl/utils/ftl_conf.o 00:02:34.088 LIB libspdk_iscsi.a 00:02:34.088 CC lib/ftl/utils/ftl_md.o 00:02:34.346 SO libspdk_nvmf.so.18.1 00:02:34.346 CC lib/ftl/utils/ftl_mempool.o 00:02:34.346 SO libspdk_iscsi.so.8.0 00:02:34.346 CC lib/ftl/utils/ftl_bitmap.o 00:02:34.346 CC lib/ftl/utils/ftl_property.o 00:02:34.346 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:34.346 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:34.346 SYMLINK libspdk_nvmf.so 00:02:34.346 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:34.346 SYMLINK libspdk_iscsi.so 00:02:34.603 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:34.603 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:02:34.603 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:34.603 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:34.603 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:34.603 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:34.603 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:34.603 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:34.603 CC lib/ftl/base/ftl_base_dev.o 00:02:34.603 CC lib/ftl/base/ftl_base_bdev.o 00:02:34.603 CC lib/ftl/ftl_trace.o 00:02:34.861 LIB libspdk_vhost.a 00:02:34.861 LIB libspdk_ftl.a 00:02:35.118 SO libspdk_vhost.so.8.0 00:02:35.118 SYMLINK libspdk_vhost.so 00:02:35.118 SO libspdk_ftl.so.9.0 00:02:35.683 SYMLINK libspdk_ftl.so 00:02:35.941 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.941 CC module/accel/error/accel_error.o 00:02:35.941 CC module/sock/posix/posix.o 00:02:35.941 CC module/sock/uring/uring.o 00:02:35.941 CC module/blob/bdev/blob_bdev.o 00:02:35.941 CC module/accel/ioat/accel_ioat.o 00:02:35.941 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.941 CC module/keyring/linux/keyring.o 00:02:35.941 CC module/keyring/file/keyring.o 00:02:35.941 CC module/accel/dsa/accel_dsa.o 00:02:36.198 LIB libspdk_env_dpdk_rpc.a 00:02:36.198 SO libspdk_env_dpdk_rpc.so.6.0 00:02:36.198 SYMLINK libspdk_env_dpdk_rpc.so 00:02:36.198 CC module/keyring/linux/keyring_rpc.o 00:02:36.198 CC module/keyring/file/keyring_rpc.o 00:02:36.198 CC module/accel/error/accel_error_rpc.o 00:02:36.198 LIB libspdk_scheduler_dynamic.a 00:02:36.198 CC module/accel/ioat/accel_ioat_rpc.o 00:02:36.198 SO libspdk_scheduler_dynamic.so.4.0 00:02:36.198 LIB libspdk_blob_bdev.a 00:02:36.456 LIB libspdk_keyring_linux.a 00:02:36.456 CC module/accel/dsa/accel_dsa_rpc.o 00:02:36.456 SO libspdk_blob_bdev.so.11.0 00:02:36.456 SO libspdk_keyring_linux.so.1.0 00:02:36.456 LIB libspdk_accel_error.a 00:02:36.456 SYMLINK libspdk_scheduler_dynamic.so 00:02:36.456 LIB libspdk_keyring_file.a 00:02:36.456 CC module/accel/iaa/accel_iaa.o 00:02:36.456 SO libspdk_accel_error.so.2.0 00:02:36.456 SYMLINK libspdk_blob_bdev.so 00:02:36.456 SYMLINK libspdk_keyring_linux.so 00:02:36.456 LIB libspdk_accel_ioat.a 00:02:36.456 SO libspdk_keyring_file.so.1.0 00:02:36.456 CC module/accel/iaa/accel_iaa_rpc.o 00:02:36.456 SYMLINK libspdk_accel_error.so 00:02:36.456 SO libspdk_accel_ioat.so.6.0 00:02:36.456 SYMLINK libspdk_keyring_file.so 00:02:36.456 LIB libspdk_accel_dsa.a 00:02:36.456 SYMLINK libspdk_accel_ioat.so 00:02:36.456 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:36.456 SO libspdk_accel_dsa.so.5.0 00:02:36.714 LIB libspdk_accel_iaa.a 00:02:36.714 SYMLINK libspdk_accel_dsa.so 00:02:36.714 CC module/scheduler/gscheduler/gscheduler.o 00:02:36.714 SO libspdk_accel_iaa.so.3.0 00:02:36.714 CC module/bdev/delay/vbdev_delay.o 00:02:36.714 CC module/bdev/error/vbdev_error.o 00:02:36.714 CC module/bdev/gpt/gpt.o 00:02:36.714 LIB libspdk_scheduler_dpdk_governor.a 00:02:36.714 SYMLINK libspdk_accel_iaa.so 00:02:36.714 LIB libspdk_sock_uring.a 00:02:36.714 CC module/bdev/error/vbdev_error_rpc.o 00:02:36.714 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:36.714 CC module/bdev/lvol/vbdev_lvol.o 00:02:36.714 SO libspdk_sock_uring.so.5.0 00:02:36.714 LIB libspdk_sock_posix.a 00:02:36.714 LIB libspdk_scheduler_gscheduler.a 00:02:36.714 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:36.971 SO libspdk_sock_posix.so.6.0 00:02:36.971 SYMLINK libspdk_sock_uring.so 00:02:36.971 SO libspdk_scheduler_gscheduler.so.4.0 00:02:36.971 SYMLINK libspdk_sock_posix.so 00:02:36.971 CC module/blobfs/bdev/blobfs_bdev.o 00:02:36.971 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:36.971 
SYMLINK libspdk_scheduler_gscheduler.so 00:02:36.971 CC module/bdev/gpt/vbdev_gpt.o 00:02:36.971 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:36.971 LIB libspdk_bdev_error.a 00:02:36.971 SO libspdk_bdev_error.so.6.0 00:02:36.971 CC module/bdev/malloc/bdev_malloc.o 00:02:36.971 CC module/bdev/null/bdev_null.o 00:02:36.971 SYMLINK libspdk_bdev_error.so 00:02:36.971 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:36.971 CC module/bdev/null/bdev_null_rpc.o 00:02:37.229 LIB libspdk_bdev_delay.a 00:02:37.229 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:37.229 CC module/bdev/nvme/bdev_nvme.o 00:02:37.229 LIB libspdk_blobfs_bdev.a 00:02:37.229 SO libspdk_bdev_delay.so.6.0 00:02:37.229 SO libspdk_blobfs_bdev.so.6.0 00:02:37.229 SYMLINK libspdk_bdev_delay.so 00:02:37.229 SYMLINK libspdk_blobfs_bdev.so 00:02:37.229 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:37.229 LIB libspdk_bdev_gpt.a 00:02:37.229 SO libspdk_bdev_gpt.so.6.0 00:02:37.229 LIB libspdk_bdev_null.a 00:02:37.229 CC module/bdev/nvme/nvme_rpc.o 00:02:37.229 SO libspdk_bdev_null.so.6.0 00:02:37.229 SYMLINK libspdk_bdev_gpt.so 00:02:37.229 CC module/bdev/nvme/bdev_mdns_client.o 00:02:37.487 LIB libspdk_bdev_malloc.a 00:02:37.487 SYMLINK libspdk_bdev_null.so 00:02:37.487 CC module/bdev/passthru/vbdev_passthru.o 00:02:37.487 SO libspdk_bdev_malloc.so.6.0 00:02:37.487 CC module/bdev/raid/bdev_raid.o 00:02:37.487 CC module/bdev/split/vbdev_split.o 00:02:37.487 LIB libspdk_bdev_lvol.a 00:02:37.487 SO libspdk_bdev_lvol.so.6.0 00:02:37.487 SYMLINK libspdk_bdev_malloc.so 00:02:37.487 CC module/bdev/raid/bdev_raid_rpc.o 00:02:37.487 CC module/bdev/raid/bdev_raid_sb.o 00:02:37.487 CC module/bdev/raid/raid0.o 00:02:37.487 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:37.487 SYMLINK libspdk_bdev_lvol.so 00:02:37.487 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:37.744 CC module/bdev/split/vbdev_split_rpc.o 00:02:37.744 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:37.744 CC module/bdev/raid/raid1.o 00:02:37.744 CC module/bdev/raid/concat.o 00:02:37.744 CC module/bdev/nvme/vbdev_opal.o 00:02:37.744 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:37.744 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:38.002 LIB libspdk_bdev_split.a 00:02:38.002 SO libspdk_bdev_split.so.6.0 00:02:38.002 LIB libspdk_bdev_passthru.a 00:02:38.002 LIB libspdk_bdev_zone_block.a 00:02:38.002 SO libspdk_bdev_zone_block.so.6.0 00:02:38.002 SYMLINK libspdk_bdev_split.so 00:02:38.002 SO libspdk_bdev_passthru.so.6.0 00:02:38.002 SYMLINK libspdk_bdev_zone_block.so 00:02:38.002 SYMLINK libspdk_bdev_passthru.so 00:02:38.260 CC module/bdev/ftl/bdev_ftl.o 00:02:38.260 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:38.260 CC module/bdev/aio/bdev_aio.o 00:02:38.260 CC module/bdev/aio/bdev_aio_rpc.o 00:02:38.260 CC module/bdev/uring/bdev_uring.o 00:02:38.260 CC module/bdev/uring/bdev_uring_rpc.o 00:02:38.260 CC module/bdev/iscsi/bdev_iscsi.o 00:02:38.260 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:38.518 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:38.518 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:38.518 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:38.518 LIB libspdk_bdev_raid.a 00:02:38.518 LIB libspdk_bdev_ftl.a 00:02:38.518 SO libspdk_bdev_ftl.so.6.0 00:02:38.518 SO libspdk_bdev_raid.so.6.0 00:02:38.518 LIB libspdk_bdev_aio.a 00:02:38.518 SO libspdk_bdev_aio.so.6.0 00:02:38.518 LIB libspdk_bdev_uring.a 00:02:38.518 SYMLINK libspdk_bdev_ftl.so 00:02:38.518 LIB libspdk_bdev_iscsi.a 00:02:38.518 SYMLINK libspdk_bdev_raid.so 00:02:38.518 SO libspdk_bdev_uring.so.6.0 00:02:38.775 
SYMLINK libspdk_bdev_aio.so 00:02:38.775 SO libspdk_bdev_iscsi.so.6.0 00:02:38.775 SYMLINK libspdk_bdev_uring.so 00:02:38.775 SYMLINK libspdk_bdev_iscsi.so 00:02:38.775 LIB libspdk_bdev_virtio.a 00:02:38.775 SO libspdk_bdev_virtio.so.6.0 00:02:39.031 SYMLINK libspdk_bdev_virtio.so 00:02:39.287 LIB libspdk_bdev_nvme.a 00:02:39.544 SO libspdk_bdev_nvme.so.7.0 00:02:39.544 SYMLINK libspdk_bdev_nvme.so 00:02:40.110 CC module/event/subsystems/iobuf/iobuf.o 00:02:40.110 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:40.110 CC module/event/subsystems/keyring/keyring.o 00:02:40.110 CC module/event/subsystems/vmd/vmd.o 00:02:40.110 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:40.110 CC module/event/subsystems/scheduler/scheduler.o 00:02:40.110 CC module/event/subsystems/sock/sock.o 00:02:40.110 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:40.110 LIB libspdk_event_keyring.a 00:02:40.110 LIB libspdk_event_vhost_blk.a 00:02:40.110 LIB libspdk_event_iobuf.a 00:02:40.368 LIB libspdk_event_scheduler.a 00:02:40.368 LIB libspdk_event_vmd.a 00:02:40.368 SO libspdk_event_keyring.so.1.0 00:02:40.368 LIB libspdk_event_sock.a 00:02:40.368 SO libspdk_event_vhost_blk.so.3.0 00:02:40.368 SO libspdk_event_iobuf.so.3.0 00:02:40.368 SO libspdk_event_sock.so.5.0 00:02:40.368 SO libspdk_event_scheduler.so.4.0 00:02:40.368 SO libspdk_event_vmd.so.6.0 00:02:40.368 SYMLINK libspdk_event_keyring.so 00:02:40.368 SYMLINK libspdk_event_vhost_blk.so 00:02:40.368 SYMLINK libspdk_event_scheduler.so 00:02:40.368 SYMLINK libspdk_event_iobuf.so 00:02:40.368 SYMLINK libspdk_event_sock.so 00:02:40.368 SYMLINK libspdk_event_vmd.so 00:02:40.625 CC module/event/subsystems/accel/accel.o 00:02:40.625 LIB libspdk_event_accel.a 00:02:40.883 SO libspdk_event_accel.so.6.0 00:02:40.883 SYMLINK libspdk_event_accel.so 00:02:41.139 CC module/event/subsystems/bdev/bdev.o 00:02:41.405 LIB libspdk_event_bdev.a 00:02:41.405 SO libspdk_event_bdev.so.6.0 00:02:41.405 SYMLINK libspdk_event_bdev.so 00:02:41.664 CC module/event/subsystems/scsi/scsi.o 00:02:41.664 CC module/event/subsystems/ublk/ublk.o 00:02:41.664 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:41.664 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:41.664 CC module/event/subsystems/nbd/nbd.o 00:02:41.922 LIB libspdk_event_ublk.a 00:02:41.922 LIB libspdk_event_nbd.a 00:02:41.922 LIB libspdk_event_scsi.a 00:02:41.922 SO libspdk_event_ublk.so.3.0 00:02:41.922 SO libspdk_event_nbd.so.6.0 00:02:41.922 SO libspdk_event_scsi.so.6.0 00:02:41.922 SYMLINK libspdk_event_ublk.so 00:02:41.922 SYMLINK libspdk_event_nbd.so 00:02:41.922 LIB libspdk_event_nvmf.a 00:02:41.922 SYMLINK libspdk_event_scsi.so 00:02:41.922 SO libspdk_event_nvmf.so.6.0 00:02:42.180 SYMLINK libspdk_event_nvmf.so 00:02:42.180 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.180 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.438 LIB libspdk_event_vhost_scsi.a 00:02:42.438 LIB libspdk_event_iscsi.a 00:02:42.438 SO libspdk_event_iscsi.so.6.0 00:02:42.438 SO libspdk_event_vhost_scsi.so.3.0 00:02:42.438 SYMLINK libspdk_event_vhost_scsi.so 00:02:42.438 SYMLINK libspdk_event_iscsi.so 00:02:42.695 SO libspdk.so.6.0 00:02:42.696 SYMLINK libspdk.so 00:02:42.953 CC app/trace_record/trace_record.o 00:02:42.953 TEST_HEADER include/spdk/accel.h 00:02:42.953 TEST_HEADER include/spdk/accel_module.h 00:02:42.953 TEST_HEADER include/spdk/assert.h 00:02:42.953 TEST_HEADER include/spdk/barrier.h 00:02:42.953 TEST_HEADER include/spdk/base64.h 00:02:42.953 TEST_HEADER include/spdk/bdev.h 00:02:42.953 TEST_HEADER 
include/spdk/bdev_module.h 00:02:42.953 TEST_HEADER include/spdk/bdev_zone.h 00:02:42.953 TEST_HEADER include/spdk/bit_array.h 00:02:42.953 TEST_HEADER include/spdk/bit_pool.h 00:02:42.953 TEST_HEADER include/spdk/blob_bdev.h 00:02:42.953 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:42.953 CXX app/trace/trace.o 00:02:42.953 TEST_HEADER include/spdk/blobfs.h 00:02:42.953 TEST_HEADER include/spdk/blob.h 00:02:42.953 TEST_HEADER include/spdk/conf.h 00:02:42.953 TEST_HEADER include/spdk/config.h 00:02:42.953 TEST_HEADER include/spdk/cpuset.h 00:02:42.953 TEST_HEADER include/spdk/crc16.h 00:02:42.953 TEST_HEADER include/spdk/crc32.h 00:02:42.953 TEST_HEADER include/spdk/crc64.h 00:02:42.953 TEST_HEADER include/spdk/dif.h 00:02:42.953 TEST_HEADER include/spdk/dma.h 00:02:42.953 CC app/nvmf_tgt/nvmf_main.o 00:02:42.953 TEST_HEADER include/spdk/endian.h 00:02:42.953 TEST_HEADER include/spdk/env_dpdk.h 00:02:42.953 TEST_HEADER include/spdk/env.h 00:02:42.953 TEST_HEADER include/spdk/event.h 00:02:42.953 TEST_HEADER include/spdk/fd_group.h 00:02:42.953 TEST_HEADER include/spdk/fd.h 00:02:42.953 TEST_HEADER include/spdk/file.h 00:02:42.953 TEST_HEADER include/spdk/ftl.h 00:02:42.953 TEST_HEADER include/spdk/gpt_spec.h 00:02:42.953 CC app/iscsi_tgt/iscsi_tgt.o 00:02:42.953 TEST_HEADER include/spdk/hexlify.h 00:02:42.953 TEST_HEADER include/spdk/histogram_data.h 00:02:42.953 TEST_HEADER include/spdk/idxd.h 00:02:42.953 CC app/spdk_tgt/spdk_tgt.o 00:02:42.953 TEST_HEADER include/spdk/idxd_spec.h 00:02:42.953 TEST_HEADER include/spdk/init.h 00:02:43.210 TEST_HEADER include/spdk/ioat.h 00:02:43.210 TEST_HEADER include/spdk/ioat_spec.h 00:02:43.210 CC examples/util/zipf/zipf.o 00:02:43.210 TEST_HEADER include/spdk/iscsi_spec.h 00:02:43.210 CC test/thread/poller_perf/poller_perf.o 00:02:43.210 TEST_HEADER include/spdk/json.h 00:02:43.210 TEST_HEADER include/spdk/jsonrpc.h 00:02:43.210 TEST_HEADER include/spdk/keyring.h 00:02:43.210 TEST_HEADER include/spdk/keyring_module.h 00:02:43.210 TEST_HEADER include/spdk/likely.h 00:02:43.210 TEST_HEADER include/spdk/log.h 00:02:43.210 TEST_HEADER include/spdk/lvol.h 00:02:43.210 CC test/app/bdev_svc/bdev_svc.o 00:02:43.210 TEST_HEADER include/spdk/memory.h 00:02:43.210 TEST_HEADER include/spdk/mmio.h 00:02:43.210 TEST_HEADER include/spdk/nbd.h 00:02:43.210 TEST_HEADER include/spdk/notify.h 00:02:43.210 TEST_HEADER include/spdk/nvme.h 00:02:43.210 TEST_HEADER include/spdk/nvme_intel.h 00:02:43.210 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:43.210 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:43.210 TEST_HEADER include/spdk/nvme_spec.h 00:02:43.210 TEST_HEADER include/spdk/nvme_zns.h 00:02:43.210 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:43.210 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:43.210 TEST_HEADER include/spdk/nvmf.h 00:02:43.210 TEST_HEADER include/spdk/nvmf_spec.h 00:02:43.210 TEST_HEADER include/spdk/nvmf_transport.h 00:02:43.210 TEST_HEADER include/spdk/opal.h 00:02:43.210 TEST_HEADER include/spdk/opal_spec.h 00:02:43.210 CC test/dma/test_dma/test_dma.o 00:02:43.210 TEST_HEADER include/spdk/pci_ids.h 00:02:43.210 TEST_HEADER include/spdk/pipe.h 00:02:43.210 TEST_HEADER include/spdk/queue.h 00:02:43.210 TEST_HEADER include/spdk/reduce.h 00:02:43.210 TEST_HEADER include/spdk/rpc.h 00:02:43.210 TEST_HEADER include/spdk/scheduler.h 00:02:43.210 TEST_HEADER include/spdk/scsi.h 00:02:43.210 TEST_HEADER include/spdk/scsi_spec.h 00:02:43.210 TEST_HEADER include/spdk/sock.h 00:02:43.210 TEST_HEADER include/spdk/stdinc.h 00:02:43.210 TEST_HEADER 
include/spdk/string.h 00:02:43.210 TEST_HEADER include/spdk/thread.h 00:02:43.210 TEST_HEADER include/spdk/trace.h 00:02:43.210 TEST_HEADER include/spdk/trace_parser.h 00:02:43.210 TEST_HEADER include/spdk/tree.h 00:02:43.210 TEST_HEADER include/spdk/ublk.h 00:02:43.210 TEST_HEADER include/spdk/util.h 00:02:43.210 TEST_HEADER include/spdk/uuid.h 00:02:43.211 TEST_HEADER include/spdk/version.h 00:02:43.211 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:43.211 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:43.211 LINK nvmf_tgt 00:02:43.211 TEST_HEADER include/spdk/vhost.h 00:02:43.211 TEST_HEADER include/spdk/vmd.h 00:02:43.211 TEST_HEADER include/spdk/xor.h 00:02:43.211 TEST_HEADER include/spdk/zipf.h 00:02:43.211 CXX test/cpp_headers/accel.o 00:02:43.211 LINK zipf 00:02:43.211 LINK poller_perf 00:02:43.211 LINK iscsi_tgt 00:02:43.468 LINK spdk_trace_record 00:02:43.468 LINK bdev_svc 00:02:43.468 LINK spdk_tgt 00:02:43.468 CXX test/cpp_headers/accel_module.o 00:02:43.468 LINK spdk_trace 00:02:43.468 CXX test/cpp_headers/assert.o 00:02:43.468 CXX test/cpp_headers/barrier.o 00:02:43.725 CC examples/ioat/perf/perf.o 00:02:43.725 LINK test_dma 00:02:43.725 CXX test/cpp_headers/base64.o 00:02:43.725 CC examples/ioat/verify/verify.o 00:02:43.725 CC test/rpc_client/rpc_client_test.o 00:02:43.725 CC test/event/event_perf/event_perf.o 00:02:43.725 CC app/spdk_lspci/spdk_lspci.o 00:02:43.725 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:43.725 CC test/event/reactor/reactor.o 00:02:43.725 CC test/env/mem_callbacks/mem_callbacks.o 00:02:43.983 CXX test/cpp_headers/bdev.o 00:02:43.983 LINK ioat_perf 00:02:43.983 LINK spdk_lspci 00:02:43.983 LINK event_perf 00:02:43.983 LINK verify 00:02:43.983 LINK rpc_client_test 00:02:43.983 CC test/event/reactor_perf/reactor_perf.o 00:02:43.983 LINK reactor 00:02:43.983 CXX test/cpp_headers/bdev_module.o 00:02:44.240 CC test/env/vtophys/vtophys.o 00:02:44.240 LINK reactor_perf 00:02:44.240 CC app/spdk_nvme_perf/perf.o 00:02:44.240 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:44.240 LINK nvme_fuzz 00:02:44.240 CC test/app/histogram_perf/histogram_perf.o 00:02:44.240 CC test/app/jsoncat/jsoncat.o 00:02:44.240 CXX test/cpp_headers/bdev_zone.o 00:02:44.240 LINK vtophys 00:02:44.240 CC examples/vmd/lsvmd/lsvmd.o 00:02:44.499 LINK env_dpdk_post_init 00:02:44.499 LINK histogram_perf 00:02:44.499 LINK jsoncat 00:02:44.499 LINK mem_callbacks 00:02:44.499 CC test/event/app_repeat/app_repeat.o 00:02:44.499 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:44.499 LINK lsvmd 00:02:44.499 CXX test/cpp_headers/bit_array.o 00:02:44.499 CC test/event/scheduler/scheduler.o 00:02:44.757 LINK app_repeat 00:02:44.757 CC test/app/stub/stub.o 00:02:44.757 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:44.757 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:44.757 CC examples/vmd/led/led.o 00:02:44.757 CC test/env/memory/memory_ut.o 00:02:44.757 CXX test/cpp_headers/bit_pool.o 00:02:44.757 LINK led 00:02:44.757 LINK scheduler 00:02:44.757 CXX test/cpp_headers/blob_bdev.o 00:02:44.757 LINK stub 00:02:45.014 CC test/env/pci/pci_ut.o 00:02:45.014 CC test/accel/dif/dif.o 00:02:45.014 CXX test/cpp_headers/blobfs_bdev.o 00:02:45.014 LINK vhost_fuzz 00:02:45.014 LINK spdk_nvme_perf 00:02:45.272 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:45.272 CC examples/idxd/perf/perf.o 00:02:45.272 CXX test/cpp_headers/blobfs.o 00:02:45.272 CC examples/thread/thread/thread_ex.o 00:02:45.272 LINK interrupt_tgt 00:02:45.272 LINK pci_ut 00:02:45.272 CC app/spdk_nvme_identify/identify.o 
00:02:45.530 CC examples/sock/hello_world/hello_sock.o 00:02:45.530 CXX test/cpp_headers/blob.o 00:02:45.530 LINK dif 00:02:45.530 LINK idxd_perf 00:02:45.530 LINK thread 00:02:45.530 CXX test/cpp_headers/conf.o 00:02:45.530 CC app/spdk_nvme_discover/discovery_aer.o 00:02:45.530 CXX test/cpp_headers/config.o 00:02:45.787 LINK hello_sock 00:02:45.787 CXX test/cpp_headers/cpuset.o 00:02:45.787 LINK memory_ut 00:02:45.787 CXX test/cpp_headers/crc16.o 00:02:45.787 LINK spdk_nvme_discover 00:02:45.787 CC test/nvme/aer/aer.o 00:02:45.787 CC test/blobfs/mkfs/mkfs.o 00:02:46.045 CC test/bdev/bdevio/bdevio.o 00:02:46.045 CC test/lvol/esnap/esnap.o 00:02:46.045 CC examples/accel/perf/accel_perf.o 00:02:46.045 CXX test/cpp_headers/crc32.o 00:02:46.045 CXX test/cpp_headers/crc64.o 00:02:46.045 CXX test/cpp_headers/dif.o 00:02:46.045 LINK iscsi_fuzz 00:02:46.045 LINK mkfs 00:02:46.303 LINK spdk_nvme_identify 00:02:46.303 LINK aer 00:02:46.303 CXX test/cpp_headers/dma.o 00:02:46.303 CC test/nvme/reset/reset.o 00:02:46.303 CXX test/cpp_headers/endian.o 00:02:46.303 LINK bdevio 00:02:46.303 CC examples/blob/hello_world/hello_blob.o 00:02:46.560 CC app/spdk_top/spdk_top.o 00:02:46.560 CC examples/blob/cli/blobcli.o 00:02:46.560 LINK accel_perf 00:02:46.560 CC app/vhost/vhost.o 00:02:46.560 CC examples/nvme/hello_world/hello_world.o 00:02:46.560 LINK reset 00:02:46.560 CXX test/cpp_headers/env_dpdk.o 00:02:46.560 CXX test/cpp_headers/env.o 00:02:46.560 CXX test/cpp_headers/event.o 00:02:46.817 LINK vhost 00:02:46.817 LINK hello_blob 00:02:46.817 LINK hello_world 00:02:46.817 CC test/nvme/sgl/sgl.o 00:02:46.817 CC test/nvme/e2edp/nvme_dp.o 00:02:46.817 CXX test/cpp_headers/fd_group.o 00:02:46.817 CC test/nvme/overhead/overhead.o 00:02:46.817 LINK blobcli 00:02:47.075 CC test/nvme/err_injection/err_injection.o 00:02:47.075 CC test/nvme/startup/startup.o 00:02:47.075 CC examples/nvme/reconnect/reconnect.o 00:02:47.075 CXX test/cpp_headers/fd.o 00:02:47.075 LINK sgl 00:02:47.075 LINK nvme_dp 00:02:47.075 CXX test/cpp_headers/file.o 00:02:47.075 LINK startup 00:02:47.333 LINK overhead 00:02:47.333 LINK err_injection 00:02:47.333 LINK spdk_top 00:02:47.333 CC app/spdk_dd/spdk_dd.o 00:02:47.333 CXX test/cpp_headers/ftl.o 00:02:47.333 LINK reconnect 00:02:47.333 CC test/nvme/reserve/reserve.o 00:02:47.333 CC app/fio/nvme/fio_plugin.o 00:02:47.590 CC test/nvme/simple_copy/simple_copy.o 00:02:47.590 CC test/nvme/connect_stress/connect_stress.o 00:02:47.590 CC app/fio/bdev/fio_plugin.o 00:02:47.590 CXX test/cpp_headers/gpt_spec.o 00:02:47.590 CC test/nvme/boot_partition/boot_partition.o 00:02:47.590 LINK reserve 00:02:47.590 LINK connect_stress 00:02:47.847 LINK simple_copy 00:02:47.847 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:47.847 CXX test/cpp_headers/hexlify.o 00:02:47.847 LINK boot_partition 00:02:47.847 LINK spdk_dd 00:02:47.847 CXX test/cpp_headers/histogram_data.o 00:02:47.847 CC test/nvme/compliance/nvme_compliance.o 00:02:48.103 CC test/nvme/fused_ordering/fused_ordering.o 00:02:48.103 CC examples/nvme/arbitration/arbitration.o 00:02:48.103 LINK spdk_bdev 00:02:48.103 LINK spdk_nvme 00:02:48.103 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:48.103 CXX test/cpp_headers/idxd.o 00:02:48.103 CXX test/cpp_headers/idxd_spec.o 00:02:48.103 CC test/nvme/fdp/fdp.o 00:02:48.103 LINK fused_ordering 00:02:48.360 LINK nvme_manage 00:02:48.360 LINK doorbell_aers 00:02:48.360 LINK nvme_compliance 00:02:48.360 CXX test/cpp_headers/init.o 00:02:48.360 LINK arbitration 00:02:48.360 CC 
examples/bdev/hello_world/hello_bdev.o 00:02:48.361 CXX test/cpp_headers/ioat.o 00:02:48.361 CC examples/bdev/bdevperf/bdevperf.o 00:02:48.361 CXX test/cpp_headers/ioat_spec.o 00:02:48.617 CC test/nvme/cuse/cuse.o 00:02:48.617 LINK fdp 00:02:48.617 CC examples/nvme/hotplug/hotplug.o 00:02:48.617 CXX test/cpp_headers/iscsi_spec.o 00:02:48.617 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:48.617 LINK hello_bdev 00:02:48.617 CC examples/nvme/abort/abort.o 00:02:48.617 CXX test/cpp_headers/json.o 00:02:48.617 CXX test/cpp_headers/jsonrpc.o 00:02:48.617 CXX test/cpp_headers/keyring.o 00:02:48.874 LINK cmb_copy 00:02:48.874 CXX test/cpp_headers/keyring_module.o 00:02:48.874 LINK hotplug 00:02:48.874 CXX test/cpp_headers/likely.o 00:02:48.874 CXX test/cpp_headers/log.o 00:02:48.874 CXX test/cpp_headers/lvol.o 00:02:48.875 CXX test/cpp_headers/memory.o 00:02:48.875 CXX test/cpp_headers/mmio.o 00:02:48.875 CXX test/cpp_headers/nbd.o 00:02:48.875 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:49.131 CXX test/cpp_headers/notify.o 00:02:49.131 LINK abort 00:02:49.131 CXX test/cpp_headers/nvme.o 00:02:49.131 CXX test/cpp_headers/nvme_intel.o 00:02:49.131 CXX test/cpp_headers/nvme_ocssd.o 00:02:49.131 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:49.131 LINK bdevperf 00:02:49.131 CXX test/cpp_headers/nvme_spec.o 00:02:49.131 LINK pmr_persistence 00:02:49.131 CXX test/cpp_headers/nvme_zns.o 00:02:49.131 CXX test/cpp_headers/nvmf_cmd.o 00:02:49.388 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:49.388 CXX test/cpp_headers/nvmf.o 00:02:49.388 CXX test/cpp_headers/nvmf_spec.o 00:02:49.388 CXX test/cpp_headers/nvmf_transport.o 00:02:49.388 CXX test/cpp_headers/opal.o 00:02:49.388 CXX test/cpp_headers/opal_spec.o 00:02:49.388 CXX test/cpp_headers/pci_ids.o 00:02:49.388 CXX test/cpp_headers/pipe.o 00:02:49.646 CXX test/cpp_headers/queue.o 00:02:49.646 CXX test/cpp_headers/reduce.o 00:02:49.646 CXX test/cpp_headers/rpc.o 00:02:49.646 CXX test/cpp_headers/scheduler.o 00:02:49.646 CXX test/cpp_headers/scsi.o 00:02:49.646 CXX test/cpp_headers/scsi_spec.o 00:02:49.646 CC examples/nvmf/nvmf/nvmf.o 00:02:49.646 CXX test/cpp_headers/sock.o 00:02:49.646 CXX test/cpp_headers/stdinc.o 00:02:49.646 CXX test/cpp_headers/string.o 00:02:49.646 CXX test/cpp_headers/thread.o 00:02:49.646 CXX test/cpp_headers/trace.o 00:02:49.905 CXX test/cpp_headers/trace_parser.o 00:02:49.905 CXX test/cpp_headers/tree.o 00:02:49.905 LINK cuse 00:02:49.905 CXX test/cpp_headers/ublk.o 00:02:49.905 CXX test/cpp_headers/util.o 00:02:49.905 CXX test/cpp_headers/uuid.o 00:02:49.905 CXX test/cpp_headers/version.o 00:02:49.905 CXX test/cpp_headers/vfio_user_pci.o 00:02:49.905 CXX test/cpp_headers/vfio_user_spec.o 00:02:49.905 CXX test/cpp_headers/vhost.o 00:02:49.905 LINK nvmf 00:02:49.905 CXX test/cpp_headers/vmd.o 00:02:49.905 CXX test/cpp_headers/xor.o 00:02:49.905 CXX test/cpp_headers/zipf.o 00:02:51.276 LINK esnap 00:02:51.841 00:02:51.841 real 1m3.008s 00:02:51.841 user 6m28.822s 00:02:51.841 sys 1m33.356s 00:02:51.841 09:28:46 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:51.841 09:28:46 make -- common/autotest_common.sh@10 -- $ set +x 00:02:51.841 ************************************ 00:02:51.841 END TEST make 00:02:51.841 ************************************ 00:02:51.841 09:28:46 -- common/autotest_common.sh@1142 -- $ return 0 00:02:51.841 09:28:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:51.841 09:28:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:51.841 09:28:46 -- pm/common@40 -- $ local 
monitor pid pids signal=TERM 00:02:51.841 09:28:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.841 09:28:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:51.841 09:28:46 -- pm/common@44 -- $ pid=5297 00:02:51.841 09:28:46 -- pm/common@50 -- $ kill -TERM 5297 00:02:51.841 09:28:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:51.841 09:28:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:51.841 09:28:46 -- pm/common@44 -- $ pid=5299 00:02:51.841 09:28:46 -- pm/common@50 -- $ kill -TERM 5299 00:02:52.108 09:28:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:52.108 09:28:46 -- nvmf/common.sh@7 -- # uname -s 00:02:52.108 09:28:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:52.108 09:28:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:52.108 09:28:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:52.108 09:28:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:52.108 09:28:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:52.108 09:28:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:52.108 09:28:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:52.108 09:28:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:52.108 09:28:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:52.108 09:28:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:52.108 09:28:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:02:52.108 09:28:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:02:52.108 09:28:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:52.108 09:28:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:52.108 09:28:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:02:52.108 09:28:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:52.108 09:28:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:52.108 09:28:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:52.108 09:28:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:52.108 09:28:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:52.108 09:28:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.108 09:28:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.108 09:28:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.108 09:28:46 -- paths/export.sh@5 -- # export PATH 00:02:52.108 09:28:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.108 09:28:46 -- nvmf/common.sh@47 -- # : 0 00:02:52.108 09:28:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:52.108 09:28:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:52.108 09:28:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:52.108 09:28:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:52.108 09:28:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:52.108 09:28:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:52.108 09:28:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:52.108 09:28:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:52.108 09:28:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:52.108 09:28:46 -- spdk/autotest.sh@32 -- # uname -s 00:02:52.108 09:28:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:52.108 09:28:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:52.108 09:28:46 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:52.108 09:28:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:52.108 09:28:46 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:52.108 09:28:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:52.108 09:28:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:52.108 09:28:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:52.108 09:28:46 -- spdk/autotest.sh@48 -- # udevadm_pid=52919 00:02:52.108 09:28:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:52.108 09:28:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:52.108 09:28:46 -- pm/common@17 -- # local monitor 00:02:52.108 09:28:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.108 09:28:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.108 09:28:46 -- pm/common@21 -- # date +%s 00:02:52.108 09:28:46 -- pm/common@25 -- # sleep 1 00:02:52.108 09:28:46 -- pm/common@21 -- # date +%s 00:02:52.108 09:28:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721035726 00:02:52.108 09:28:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721035726 00:02:52.108 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721035726_collect-vmstat.pm.log 00:02:52.108 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721035726_collect-cpu-load.pm.log 00:02:53.041 09:28:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:53.041 09:28:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:53.041 09:28:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:53.041 09:28:47 -- common/autotest_common.sh@10 -- # set +x 00:02:53.041 09:28:47 -- spdk/autotest.sh@59 -- # create_test_list 00:02:53.041 09:28:47 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:53.041 09:28:47 -- common/autotest_common.sh@10 -- # set +x 00:02:53.041 09:28:47 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:53.041 09:28:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:53.041 09:28:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:53.041 09:28:47 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:53.041 09:28:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:53.041 09:28:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:53.041 09:28:47 -- common/autotest_common.sh@1455 -- # uname 00:02:53.299 09:28:47 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:53.299 09:28:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:53.299 09:28:47 -- common/autotest_common.sh@1475 -- # uname 00:02:53.299 09:28:47 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:53.299 09:28:47 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:53.299 09:28:47 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:53.299 09:28:47 -- spdk/autotest.sh@72 -- # hash lcov 00:02:53.299 09:28:47 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:53.299 09:28:47 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:53.299 --rc lcov_branch_coverage=1 00:02:53.299 --rc lcov_function_coverage=1 00:02:53.299 --rc genhtml_branch_coverage=1 00:02:53.299 --rc genhtml_function_coverage=1 00:02:53.299 --rc genhtml_legend=1 00:02:53.299 --rc geninfo_all_blocks=1 00:02:53.299 ' 00:02:53.299 09:28:47 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:53.299 --rc lcov_branch_coverage=1 00:02:53.299 --rc lcov_function_coverage=1 00:02:53.299 --rc genhtml_branch_coverage=1 00:02:53.299 --rc genhtml_function_coverage=1 00:02:53.299 --rc genhtml_legend=1 00:02:53.299 --rc geninfo_all_blocks=1 00:02:53.299 ' 00:02:53.299 09:28:47 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:53.299 --rc lcov_branch_coverage=1 00:02:53.299 --rc lcov_function_coverage=1 00:02:53.299 --rc genhtml_branch_coverage=1 00:02:53.299 --rc genhtml_function_coverage=1 00:02:53.299 --rc genhtml_legend=1 00:02:53.299 --rc geninfo_all_blocks=1 00:02:53.299 --no-external' 00:02:53.299 09:28:47 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:53.299 --rc lcov_branch_coverage=1 00:02:53.299 --rc lcov_function_coverage=1 00:02:53.299 --rc genhtml_branch_coverage=1 00:02:53.299 --rc genhtml_function_coverage=1 00:02:53.299 --rc genhtml_legend=1 00:02:53.299 --rc geninfo_all_blocks=1 00:02:53.299 --no-external' 00:02:53.299 09:28:47 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:53.299 lcov: LCOV version 1.14 00:02:53.299 09:28:47 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:08.161 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:08.161 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:20.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:20.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
00:03:20.358 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:20.358 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:20.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:20.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:20.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:20.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:20.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:20.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:20.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:20.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:20.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:20.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:20.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:20.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:20.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:20.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:20.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:20.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:20.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:20.617 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:20.617 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:20.618 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:20.618 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:20.618 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:20.618 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:20.875 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:20.875 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 
00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:20.876 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:20.876 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:21.134 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:21.134 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:21.134 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:21.134 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:21.134 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:21.134 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:21.134 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:21.134 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:21.134 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:21.134 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:21.134 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:21.134 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:25.383 09:29:19 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:25.383 09:29:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:25.383 09:29:19 -- common/autotest_common.sh@10 -- # set +x 00:03:25.383 09:29:19 -- spdk/autotest.sh@91 -- # rm -f 00:03:25.383 09:29:19 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:25.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:25.383 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:25.383 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:25.383 09:29:19 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:25.383 09:29:19 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:25.383 09:29:19 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:25.383 09:29:19 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:25.383 09:29:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:25.383 09:29:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:25.383 09:29:19 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:25.383 09:29:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:25.383 09:29:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:25.383 09:29:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:25.383 09:29:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:25.383 09:29:19 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:25.383 09:29:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:25.383 09:29:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:25.383 09:29:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:25.383 09:29:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:25.383 09:29:19 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:25.383 09:29:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:25.383 09:29:19 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:25.383 09:29:19 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:25.384 09:29:19 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:25.384 09:29:19 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:25.384 09:29:19 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:25.384 09:29:19 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:25.384 09:29:19 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:25.384 09:29:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:25.384 09:29:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:25.384 09:29:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:25.384 09:29:19 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:25.384 09:29:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:25.384 No valid GPT data, bailing 00:03:25.384 09:29:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:25.641 09:29:19 -- scripts/common.sh@391 -- # pt= 00:03:25.641 09:29:19 -- scripts/common.sh@392 -- # return 1 00:03:25.641 09:29:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:25.641 1+0 records in 00:03:25.641 1+0 records out 00:03:25.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414297 s, 253 MB/s 00:03:25.641 09:29:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:25.641 09:29:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:25.641 09:29:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:25.641 09:29:19 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:25.641 09:29:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:25.641 No valid GPT data, bailing 00:03:25.641 09:29:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:25.641 09:29:19 -- scripts/common.sh@391 -- # pt= 00:03:25.641 09:29:19 -- scripts/common.sh@392 -- # return 1 00:03:25.641 09:29:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:25.641 1+0 records in 00:03:25.641 1+0 records out 00:03:25.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459454 s, 228 MB/s 00:03:25.641 09:29:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:25.641 09:29:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:25.641 09:29:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:25.641 09:29:19 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:25.641 09:29:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:25.641 No valid GPT data, bailing 00:03:25.641 09:29:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:25.641 09:29:20 -- scripts/common.sh@391 -- # pt= 00:03:25.641 09:29:20 -- scripts/common.sh@392 -- # return 1 00:03:25.641 09:29:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:25.641 1+0 records in 00:03:25.641 1+0 records out 00:03:25.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485762 s, 216 MB/s 00:03:25.641 09:29:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:25.641 09:29:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:25.641 09:29:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:25.641 09:29:20 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:25.641 09:29:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:25.641 No valid GPT data, bailing 00:03:25.641 09:29:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:25.641 09:29:20 -- scripts/common.sh@391 -- # pt= 00:03:25.641 09:29:20 -- scripts/common.sh@392 -- # return 1 00:03:25.641 09:29:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
00:03:25.898 1+0 records in 00:03:25.898 1+0 records out 00:03:25.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405276 s, 259 MB/s 00:03:25.898 09:29:20 -- spdk/autotest.sh@118 -- # sync 00:03:25.898 09:29:20 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:25.898 09:29:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:25.898 09:29:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:27.793 09:29:21 -- spdk/autotest.sh@124 -- # uname -s 00:03:27.793 09:29:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:27.793 09:29:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:27.793 09:29:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.793 09:29:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.793 09:29:21 -- common/autotest_common.sh@10 -- # set +x 00:03:27.793 ************************************ 00:03:27.793 START TEST setup.sh 00:03:27.793 ************************************ 00:03:27.793 09:29:21 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:27.793 * Looking for test storage... 00:03:27.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:27.793 09:29:21 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:27.793 09:29:21 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:27.793 09:29:21 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:27.793 09:29:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.793 09:29:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.793 09:29:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:27.793 ************************************ 00:03:27.793 START TEST acl 00:03:27.793 ************************************ 00:03:27.793 09:29:21 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:27.793 * Looking for test storage... 
00:03:27.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:27.793 09:29:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:27.793 09:29:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:27.793 09:29:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:27.793 09:29:21 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:27.793 09:29:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.793 09:29:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:27.793 09:29:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:27.793 09:29:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.793 09:29:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.793 09:29:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.793 09:29:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:27.793 09:29:22 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.793 09:29:22 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:27.793 09:29:22 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:27.793 09:29:22 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:27.793 09:29:22 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:27.793 09:29:22 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:27.793 09:29:22 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.793 09:29:22 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:28.359 09:29:22 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:28.359 09:29:22 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:28.359 09:29:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.359 09:29:22 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:28.359 09:29:22 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.359 09:29:22 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:28.925 09:29:23 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:28.925 09:29:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:28.925 09:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.925 Hugepages 00:03:28.925 node hugesize free / total 00:03:28.925 09:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:28.925 09:29:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:28.925 09:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.925 00:03:28.925 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:28.925 09:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:28.925 09:29:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:28.925 09:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.183 09:29:23 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:29.184 09:29:23 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:29.184 09:29:23 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.184 09:29:23 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.184 09:29:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:29.184 ************************************ 00:03:29.184 START TEST denied 00:03:29.184 ************************************ 00:03:29.184 09:29:23 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:29.184 09:29:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:29.184 09:29:23 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:29.184 09:29:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:29.184 09:29:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.184 09:29:23 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:30.119 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:30.119 09:29:24 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:30.119 09:29:24 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:30.119 09:29:24 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:30.119 09:29:24 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:30.119 09:29:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:30.119 09:29:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:30.119 09:29:24 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:30.119 09:29:24 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:30.119 09:29:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.119 09:29:24 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:30.686 00:03:30.686 real 0m1.448s 00:03:30.686 user 0m0.543s 00:03:30.686 sys 0m0.839s 00:03:30.686 09:29:25 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.686 09:29:25 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:30.686 ************************************ 00:03:30.686 END TEST denied 00:03:30.686 ************************************ 00:03:30.686 09:29:25 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:30.686 09:29:25 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:30.686 09:29:25 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.686 09:29:25 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.686 09:29:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:30.686 ************************************ 00:03:30.686 START TEST allowed 00:03:30.686 ************************************ 00:03:30.686 09:29:25 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:30.686 09:29:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:30.686 09:29:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:30.686 09:29:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.686 09:29:25 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:30.686 09:29:25 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:31.659 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:31.659 09:29:25 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:31.659 09:29:25 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:31.659 09:29:25 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:31.659 09:29:25 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:31.659 09:29:25 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:31.659 09:29:25 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:31.659 09:29:25 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:31.659 09:29:25 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:31.659 09:29:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.659 09:29:25 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:32.226 00:03:32.226 real 0m1.515s 00:03:32.226 user 0m0.651s 00:03:32.226 sys 0m0.836s 00:03:32.226 09:29:26 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:32.226 09:29:26 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:32.226 ************************************ 00:03:32.226 END TEST allowed 00:03:32.226 ************************************ 00:03:32.226 09:29:26 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:32.226 00:03:32.226 real 0m4.713s 00:03:32.226 user 0m2.015s 00:03:32.226 sys 0m2.609s 00:03:32.226 09:29:26 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.226 09:29:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:32.226 ************************************ 00:03:32.226 END TEST acl 00:03:32.226 ************************************ 00:03:32.226 09:29:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:32.226 09:29:26 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:32.226 09:29:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.226 09:29:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.226 09:29:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:32.226 ************************************ 00:03:32.226 START TEST hugepages 00:03:32.226 ************************************ 00:03:32.226 09:29:26 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:32.486 * Looking for test storage... 00:03:32.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.486 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6036496 kB' 'MemAvailable: 7409700 kB' 'Buffers: 2436 kB' 'Cached: 1587520 kB' 'SwapCached: 0 kB' 'Active: 435988 kB' 'Inactive: 1258624 kB' 'Active(anon): 115144 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 106316 kB' 'Mapped: 48696 kB' 'Shmem: 10488 kB' 'KReclaimable: 61348 kB' 'Slab: 132568 kB' 'SReclaimable: 61348 kB' 'SUnreclaim: 71220 kB' 'KernelStack: 6412 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 336496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.487 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.488 09:29:26 
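[editor note] The trace above shows setup/common.sh's get_meminfo helper walking /proc/meminfo with IFS=': ' and read -r until it reaches the Hugepagesize row, echoing 2048, after which setup/hugepages.sh records 2048 kB as default_hugepages and starts clearing the per-node counters. A minimal standalone sketch of that parsing pattern follows; the function and variable names are illustrative, not the exact SPDK helpers.

#!/usr/bin/env bash
# Sketch: return the value of a single /proc/meminfo field, the way the
# traced get_meminfo loop does (split on ': ', keep the first token as the
# key and the second as the value, ignore the trailing unit).
meminfo_value() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every row until the requested key is found.
        [[ $var == "$want" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# Example: prints 2048 on a typical x86_64 host, matching the "echo 2048" above.
default_hugepages=$(meminfo_value Hugepagesize)
echo "default hugepage size: ${default_hugepages} kB"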
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:32.488 09:29:26 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:32.488 09:29:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.488 09:29:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.488 09:29:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.488 ************************************ 00:03:32.488 START TEST default_setup 00:03:32.488 ************************************ 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.488 09:29:26 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:33.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.320 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:33.320 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8114272 kB' 'MemAvailable: 9487304 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452488 kB' 'Inactive: 1258636 kB' 'Active(anon): 131644 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122744 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132200 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71220 kB' 'KernelStack: 6320 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.320 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
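[editor note] default_setup above asks get_test_nr_hugepages for 2097152 kB on node 0, which at the 2048 kB default works out to the 1024 pages visible in the meminfo dump ('HugePages_Total: 1024', 'Hugetlb: 2097152 kB'), after clear_hp first zeroed every per-node counter. A hedged sketch of that clear-then-allocate step, written against the standard per-node sysfs paths rather than the SPDK helpers; the size and node arguments are illustrative defaults.

#!/usr/bin/env bash
# Sketch: reserve a pool of 2048 kB hugepages on one NUMA node by writing
# the per-node sysfs counter, roughly what the traced clear_hp ("echo 0")
# and nr_hugepages setup do. Needs root.
set -euo pipefail

size_kb=${1:-2097152}   # requested pool size in kB (2097152 kB = 2 GiB)
node=${2:-0}            # NUMA node to allocate on
hp_kb=2048              # hugepage size assumed by this sketch

pages=$(( size_kb / hp_kb ))    # 2097152 / 2048 = 1024 pages
nr=/sys/devices/system/node/node${node}/hugepages/hugepages-${hp_kb}kB/nr_hugepages

echo 0        > "$nr"   # clear any leftover reservation first
echo "$pages" > "$nr"   # then ask the kernel for the new pool

echo "node${node}: requested ${pages} pages, kernel granted $(cat "$nr")"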
00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.321 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8114020 kB' 'MemAvailable: 9487052 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452656 kB' 'Inactive: 1258636 kB' 'Active(anon): 131812 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122956 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132200 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71220 kB' 'KernelStack: 6304 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
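[editor note] The verify_nr_hugepages pass running here re-reads /proc/meminfo for AnonHugePages, HugePages_Surp and HugePages_Rsvd (all 0 in these dumps) and compares HugePages_Total/Free against the 1024 pages just requested. A small sketch of that check, reusing the same field-scanning idea; the expected page count is a parameter of the sketch, not something taken from the SPDK scripts.

#!/usr/bin/env bash
# Sketch: verify that a hugepage reservation actually took hold, in the
# spirit of the traced verify_nr_hugepages step. Exits non-zero on mismatch.
set -euo pipefail

expected=${1:-1024}     # pages we asked for

field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

total=$(field HugePages_Total)
free=$(field HugePages_Free)
rsvd=$(field HugePages_Rsvd)
surp=$(field HugePages_Surp)
anon=$(field AnonHugePages)

echo "HugePages total=$total free=$free rsvd=$rsvd surp=$surp anonhuge=${anon} kB"

# The pool should match the request, with no surplus pages borrowed beyond it.
[[ $total -eq $expected && $surp -eq 0 ]] || {
    echo "hugepage pool does not match the requested ${expected} pages" >&2
    exit 1
}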
00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.322 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:33.323 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8115292 kB' 'MemAvailable: 9488324 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452384 kB' 'Inactive: 1258636 kB' 'Active(anon): 131540 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122908 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132196 kB' 'SReclaimable: 60980 kB' 
'SUnreclaim: 71216 kB' 'KernelStack: 6272 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:33.326 nr_hugepages=1024 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:33.326 resv_hugepages=0 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.326 surplus_hugepages=0 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.326 anon_hugepages=0 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8115292 kB' 'MemAvailable: 9488324 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452592 kB' 'Inactive: 1258636 kB' 'Active(anon): 131748 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122956 kB' 'Mapped: 48952 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132192 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71212 kB' 'KernelStack: 6304 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.326 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.327 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8114788 kB' 'MemUsed: 4127184 kB' 'SwapCached: 0 kB' 'Active: 452592 kB' 'Inactive: 1258636 kB' 'Active(anon): 131748 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1589944 kB' 'Mapped: 48692 kB' 'AnonPages: 122964 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60980 kB' 'Slab: 132180 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 
09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.328 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
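The trace above is setup/common.sh's get_meminfo helper at work: it reads the meminfo file into an array, then loops with IFS=': ' read -r var val _, hitting "continue" for every key that is not the one requested (HugePages_Surp here) and finally echoing the matching value (0) before returning. A minimal sketch of that scanning pattern, assuming nothing beyond what the trace shows (the function name below is hypothetical, and the per-node /sys/devices/system/node/node<N>/meminfo handling with its "Node <N>" prefix stripping is deliberately omitted, so this is a simplification rather than the verbatim SPDK helper):

# Simplified sketch of the per-key scan traced above: split each meminfo
# line on ': ' into key and value, skip ("continue") until the key matches,
# then echo the value and return 0. The real helper also accepts a node id
# and strips the "Node <N>" prefix from per-node meminfo files; that part
# is left out here.
get_meminfo_sketch() {
    local get=$1
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# Example: the lookup this trace performs, ending in "echo 0".
get_meminfo_sketch HugePages_Surp
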
00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.329 node0=1024 expecting 1024 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:33.329 00:03:33.329 real 0m0.959s 00:03:33.329 user 0m0.415s 00:03:33.329 sys 0m0.504s 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.329 09:29:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:33.329 ************************************ 00:03:33.329 END TEST default_setup 00:03:33.329 ************************************ 00:03:33.588 09:29:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:33.588 09:29:27 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:33.588 09:29:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.588 09:29:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.588 09:29:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:33.588 ************************************ 00:03:33.588 START TEST per_node_1G_alloc 00:03:33.588 ************************************ 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:33.588 09:29:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:33.588 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:33.589 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:33.589 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:33.589 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:33.589 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:33.589 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:33.589 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:33.589 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.589 09:29:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:33.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.853 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:33.853 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9158400 kB' 'MemAvailable: 10531432 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452872 kB' 'Inactive: 1258636 kB' 'Active(anon): 132028 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122912 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132152 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71172 kB' 'KernelStack: 6276 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.853 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.853 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.854 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9158400 kB' 'MemAvailable: 10531432 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452712 kB' 'Inactive: 1258636 kB' 'Active(anon): 131868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122976 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132184 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71204 kB' 'KernelStack: 6288 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.855 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9158400 kB' 'MemAvailable: 10531432 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452620 kB' 'Inactive: 1258636 kB' 'Active(anon): 131776 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122984 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132176 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71196 kB' 'KernelStack: 6320 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 354384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.856 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.857 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
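The loop traced above is setup/common.sh's get_meminfo helper at work: it dumps the relevant meminfo file into an array, then walks it with IFS=': ' and `read -r var val _`, hitting `continue` on every key until it reaches the one it was asked for (here HugePages_Rsvd) and echoing that value. A minimal self-contained sketch of the same pattern follows; the function name is hypothetical and this is not the SPDK helper itself, just the lookup it performs.

    # Hypothetical standalone equivalent of the lookup traced above (the real
    # helper is get_meminfo in setup/common.sh and also handles per-node files).
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # echo only the value once the requested key is reached, e.g. "0"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo_field HugePages_Rsvd    # prints 0 on this machine, per the trace above
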
00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 
09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:33.858 nr_hugepages=512 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:33.858 resv_hugepages=0 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.858 surplus_hugepages=0 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:33.858 anon_hugepages=0 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9158652 kB' 'MemAvailable: 10531680 kB' 'Buffers: 2436 kB' 'Cached: 1587504 kB' 'SwapCached: 0 kB' 'Active: 452348 kB' 'Inactive: 1258632 kB' 'Active(anon): 131504 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258632 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122736 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 
kB' 'Slab: 132160 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71180 kB' 'KernelStack: 6256 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.858 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.859 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.859 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:33.859 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
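Just above, hugepages.sh asserts that the 512 pages it observed equal nr_hugepages + surp + resv (and, with both of those at 0 in this idle run, nr_hugepages exactly) before re-reading HugePages_Total. A sketch of that arithmetic, pulling the counters straight from /proc/meminfo; variable names are mine, and using HugePages_Free as the observed left-hand count is an assumption, since that lookup happened earlier in the log, outside this excerpt.

    # Sketch of the two (( ... )) consistency checks shown above.
    nr_hugepages=512                                           # pool size this test configured
    free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)  # 512 in this run
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)  # 0 in this run
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)  # 0 in this run
    if (( free == nr_hugepages + surp + resv )) && (( free == nr_hugepages )); then
        echo "hugepage pool matches the requested $nr_hugepages pages"
    fi
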
00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
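Once the global counters check out, the test switches to the per-node view (the get_nodes and `get_meminfo HugePages_Surp 0` calls further down, ending in `node0=512 expecting 512`). The node-scoped counters come from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo; a hedged sketch of that final verification is below, where the paths are the standard kernel sysfs layout and the variable names are mine.

    # Verify how many 2 MiB hugepages landed on NUMA node 0, mirroring the
    # "node0=512 expecting 512" message at the end of this test.
    expected=512
    node_meminfo=/sys/devices/system/node/node0/meminfo
    # Per-node lines carry a "Node <id>" prefix, e.g. "Node 0 HugePages_Total:   512"
    actual=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_meminfo")
    if (( ${actual:-0} == expected )); then
        echo "node0=$expected expecting $expected"
    else
        echo "node0 has ${actual:-0} hugepages, expected $expected" >&2
    fi
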
00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.120 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 
09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9158652 kB' 'MemUsed: 3083320 kB' 'SwapCached: 0 kB' 'Active: 452596 kB' 'Inactive: 1258636 kB' 'Active(anon): 131752 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1589944 kB' 'Mapped: 48688 kB' 'AnonPages: 122940 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60980 kB' 'Slab: 132156 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.121 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.122 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.123 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.123 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.123 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.123 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.123 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.123 node0=512 expecting 512 00:03:34.123 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:34.123 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:34.123 00:03:34.123 real 0m0.541s 00:03:34.123 user 0m0.261s 00:03:34.123 sys 0m0.287s 00:03:34.123 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.123 09:29:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.123 ************************************ 00:03:34.123 END TEST per_node_1G_alloc 00:03:34.123 ************************************ 00:03:34.123 09:29:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:34.123 09:29:28 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:34.123 09:29:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.123 09:29:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.123 09:29:28 
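The trace above closes out the per_node_1G_alloc case: the node-0 meminfo scan found HugePages_Surp at 0, the test echoed 'node0=512 expecting 512', the final comparison of 512 against the expected 512 passed, and the case finished in roughly half a second of wall-clock time before the END TEST banner. The next case, even_2G_alloc, is dispatched through the same run_test wrapper. That wrapper is not shown in this excerpt; the snippet below is only a minimal sketch, under a hypothetical run_test_sketch name, of the banner-and-timing pattern visible in the log, not the actual autotest_common.sh implementation.

run_test_sketch() {
    # Print the START banner, time the test command, then print the END banner,
    # mirroring the '**** START TEST ... / END TEST ...' lines in this log.
    local name=$1
    shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}

# Example invocation, matching the call pattern seen above:
# run_test_sketch even_2G_alloc even_2G_alloc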
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.123 ************************************ 00:03:34.123 START TEST even_2G_alloc 00:03:34.123 ************************************ 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.123 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:34.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:34.383 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:34.383 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc 
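For the even_2G_alloc case the trace shows get_test_nr_hugepages being handed 2097152 and deriving nr_hugepages=1024, i.e. 1024 pages of the default 2048 kB hugepage size (2 GiB in total), all assigned to the single NUMA node of this VM before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are handed to scripts/setup.sh. Below is a minimal sketch of that arithmetic, with hypothetical variable names and assuming the size argument is expressed in kB; it is not the setup/hugepages.sh code itself.

default_hugepages=2048                 # kB, from 'Hugepagesize: 2048 kB' in the meminfo dumps below
size_kb=2097152                        # requested allocation, 2 GiB expressed in kB
nr_hugepages=$(( size_kb / default_hugepages ))   # 2097152 / 2048 = 1024 pages
declare -a nodes_test
nodes_test[0]=$nr_hugepages            # one NUMA node in this VM, so node0 gets everything
export NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes   # the variables the trace sets before running setup.sh
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"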
-- setup/hugepages.sh@92 -- # local surp 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8123528 kB' 'MemAvailable: 9496560 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 453132 kB' 'Inactive: 1258636 kB' 'Active(anon): 132288 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122980 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132144 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71164 kB' 'KernelStack: 6260 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- 
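verify_nr_hugepages now starts polling /proc/meminfo through the get_meminfo helper: the file is read into an array, any leading 'Node <N> ' prefix is stripped so the same code can also work against the per-node meminfo files under /sys/devices/system/node, and each 'Key: value' pair is compared against the requested key (AnonHugePages here) until the matching value is echoed back. Every non-matching key produces one of the 'continue' lines that dominate this trace. The helper below is a simplified, self-contained sketch of that lookup for the system-wide file only; the get_meminfo_sketch name is made up for illustration and this is not the setup/common.sh implementation.

get_meminfo_sketch() {
    # Return the numeric value for one /proc/meminfo key, e.g. AnonHugePages.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do    # same IFS=': ' / read -r pattern as the trace
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0                                  # key not present at all
}

get_meminfo_sketch AnonHugePages            # prints 0 on this VM, matching the trace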
setup/common.sh@32 -- # continue 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.383 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.384 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8123024 kB' 'MemAvailable: 9496056 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452396 kB' 'Inactive: 
1258636 kB' 'Active(anon): 131552 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122776 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132160 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71180 kB' 'KernelStack: 6304 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.385 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.386 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.387 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8122268 kB' 'MemAvailable: 9495300 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452636 kB' 'Inactive: 1258636 kB' 'Active(anon): 131792 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123012 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132156 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71176 kB' 'KernelStack: 6304 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:34.387 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.387 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.387 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc 
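The same scan pattern repeats for each counter verify_nr_hugepages needs; so far it has recorded anon=0 and surp=0, and the dump above already shows HugePages_Rsvd at 0 as well. If you only want those figures without replaying the whole loop, a one-off awk query over /proc/meminfo yields the same numbers; this is a hypothetical convenience snippet, not part of the test suite.

# Pull the counters the test is collecting straight out of /proc/meminfo.
for key in AnonHugePages HugePages_Surp HugePages_Rsvd HugePages_Total HugePages_Free; do
    printf '%s=%s\n' "$key" "$(awk -v k="$key:" '$1 == k {print $2}' /proc/meminfo)"
done
# On this VM that prints 0, 0, 0, 1024 and 1024 respectively, per the dumps in this log.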
-- setup/common.sh@31 -- # read -r var val _ 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.717 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.718 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:03:34.719 nr_hugepages=1024 00:03:34.719 resv_hugepages=0 00:03:34.719 surplus_hugepages=0 00:03:34.719 anon_hugepages=0 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8122268 kB' 'MemAvailable: 9495300 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452384 kB' 'Inactive: 1258636 kB' 'Active(anon): 131540 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122772 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132152 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71172 kB' 'KernelStack: 6304 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:34.719 09:29:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.719 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.720 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
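The get_meminfo lookups traced above all reduce to a single field scan over /proc/meminfo (or over /sys/devices/system/node/node<N>/meminfo when a node argument is given). A minimal sketch of that loop, reconstructed from the setup/common.sh trace lines rather than copied from the script, assuming extglob is enabled as the +([0-9]) pattern in the trace implies:

  shopt -s extglob                             # needed for the +([0-9]) pattern below
  get_meminfo() {
      local get=$1 node=$2 var val _ mem_f
      local -a mem
      mem_f=/proc/meminfo
      # per-node lookups switch to the sysfs copy when it exists (common.sh@23-24)
      [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")         # strip the "Node N " prefix on sysfs files (common.sh@29)
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue     # the long runs of 'continue' above are these misses
          echo "$val"                          # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
          return 0
      done
      return 1
  }

Each traced comparison such as [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] is simply one field failing that match before HugePages_Total itself is reached and echoed.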
00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.721 09:29:28 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8122268 kB' 'MemUsed: 4119704 kB' 'SwapCached: 0 kB' 'Active: 452692 kB' 'Inactive: 1258636 kB' 'Active(anon): 131848 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1589944 kB' 'Mapped: 48688 kB' 'AnonPages: 123036 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60980 kB' 'Slab: 132152 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.721 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.722 node0=1024 expecting 1024 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.722 
09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.722 00:03:34.722 real 0m0.537s 00:03:34.722 user 0m0.256s 00:03:34.722 sys 0m0.288s 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.722 ************************************ 00:03:34.722 END TEST even_2G_alloc 00:03:34.722 ************************************ 00:03:34.722 09:29:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:34.722 09:29:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:34.722 09:29:28 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:34.722 09:29:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.722 09:29:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.722 09:29:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:34.722 ************************************ 00:03:34.722 START TEST odd_alloc 00:03:34.722 ************************************ 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # 
HUGE_EVEN_ALLOC=yes 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.722 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:34.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:34.983 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:34.983 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8139312 kB' 'MemAvailable: 9512344 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452908 kB' 'Inactive: 1258636 kB' 'Active(anon): 132064 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123180 kB' 'Mapped: 48748 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132164 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71184 kB' 'KernelStack: 6292 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.983 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 
09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.984 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8139312 kB' 'MemAvailable: 9512344 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452848 kB' 'Inactive: 1258636 kB' 'Active(anon): 132004 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123160 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132248 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71268 kB' 'KernelStack: 6304 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 
09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.985 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.986 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.247 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.247 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.247 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.247 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.247 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.247 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.247 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.247 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.247 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.247 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.247 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.248 
09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8139060 kB' 'MemAvailable: 9512092 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452908 kB' 'Inactive: 1258636 kB' 'Active(anon): 132064 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123176 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132244 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71264 kB' 'KernelStack: 6288 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
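(The xtrace above and below walks /proc/meminfo one field at a time, comparing each key against the value requested by get_meminfo and echoing the matching value. A minimal bash sketch of that parsing pattern, reconstructed from the trace itself rather than taken verbatim from setup/common.sh — the real helper also handles per-node meminfo files and caches the file contents in an array:

    get_meminfo_sketch() {
        # $1 is the field to look up, e.g. HugePages_Rsvd or AnonHugePages
        local get=$1 var val _
        # IFS=': ' splits "HugePages_Rsvd:       0" into key and value,
        # mirroring the repeated "IFS=': '" / "read -r var val _" lines in the trace
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
)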
00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.248 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.249 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:35.250 nr_hugepages=1025 00:03:35.250 resv_hugepages=0 00:03:35.250 surplus_hugepages=0 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.250 anon_hugepages=0 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8138812 kB' 'MemAvailable: 9511844 kB' 'Buffers: 2436 kB' 'Cached: 1587508 kB' 'SwapCached: 0 kB' 'Active: 452444 kB' 'Inactive: 1258636 kB' 'Active(anon): 131600 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122708 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132244 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71264 kB' 'KernelStack: 6288 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.250 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.251 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8138812 kB' 'MemUsed: 4103160 kB' 'SwapCached: 0 kB' 'Active: 452700 kB' 'Inactive: 1258636 kB' 'Active(anon): 131856 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1589944 kB' 'Mapped: 48688 kB' 'AnonPages: 123020 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60980 kB' 'Slab: 132236 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.252 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
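
The xtrace block above is setup/common.sh's get_meminfo helper walking every field of the node-0 meminfo file until it reaches the requested key (HugePages_Surp here) and echoing its value, 0. A condensed sketch of that lookup, reconstructed from the trace rather than copied from the SPDK source, so the exact shape is illustrative:

  # sketch of the traced lookup: find one field in (per-node) meminfo
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      # per-node stats live under /sys; fall back to the system-wide file
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # node files prefix every line with "Node <id> "
      local var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"                # e.g. 0 for HugePages_Surp, 1025 for HugePages_Total
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  # get_meminfo HugePages_Surp 0   -> 0 in the run traced above
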
00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.253 node0=1025 expecting 1025 00:03:35.253 ************************************ 00:03:35.253 END TEST odd_alloc 00:03:35.253 ************************************ 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:35.253 00:03:35.253 real 0m0.558s 00:03:35.253 user 0m0.278s 00:03:35.253 sys 0m0.291s 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.253 09:29:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:35.253 09:29:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:35.253 09:29:29 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:35.253 09:29:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.253 09:29:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.253 09:29:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.253 ************************************ 00:03:35.253 START TEST custom_alloc 00:03:35.253 ************************************ 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.253 09:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:35.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:35.512 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.512 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.776 09:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9188680 kB' 'MemAvailable: 10561716 kB' 'Buffers: 2436 kB' 'Cached: 1587512 kB' 'SwapCached: 0 kB' 'Active: 452668 kB' 'Inactive: 1258640 kB' 'Active(anon): 131824 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122900 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132248 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71268 kB' 'KernelStack: 6320 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
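
A few entries back in the custom_alloc prologue, get_test_nr_hugepages was handed 1048576 (kB) and arrived at nr_hugepages=512 on the single NUMA node, which the node meminfo dump above confirms (HugePages_Total: 512, Hugepagesize: 2048 kB, Hugetlb: 1048576 kB). The arithmetic behind those numbers, written out as a small illustrative snippet (variable names here are not the script's own):

  size_kb=1048576                   # size passed to get_test_nr_hugepages
  hugepage_kb=2048                  # "Hugepagesize: 2048 kB" from meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))   # -> 512
  nodes_test[0]=$nr_hugepages       # one node, so node0 carries all 512 pages
  echo "HUGENODE=nodes_hp[0]=$nr_hugepages"   # matches HUGENODE='nodes_hp[0]=512' in the trace
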
00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.776 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 9188432 kB' 'MemAvailable: 10561468 kB' 'Buffers: 2436 kB' 'Cached: 1587512 kB' 'SwapCached: 0 kB' 'Active: 452468 kB' 'Inactive: 1258640 kB' 'Active(anon): 131624 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123032 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132236 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71256 kB' 'KernelStack: 6304 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.777 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.778 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9188432 kB' 'MemAvailable: 10561468 kB' 'Buffers: 2436 kB' 'Cached: 1587512 kB' 'SwapCached: 0 kB' 'Active: 452416 kB' 'Inactive: 1258640 kB' 'Active(anon): 131572 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122940 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132236 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71256 kB' 'KernelStack: 6288 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.779 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.780 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:35.781 nr_hugepages=512 00:03:35.781 resv_hugepages=0 00:03:35.781 surplus_hugepages=0 00:03:35.781 anon_hugepages=0 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9188432 kB' 'MemAvailable: 10561468 kB' 'Buffers: 2436 kB' 'Cached: 1587512 kB' 'SwapCached: 0 kB' 'Active: 452724 kB' 'Inactive: 1258640 kB' 'Active(anon): 131880 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123000 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132236 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71256 kB' 'KernelStack: 6304 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.781 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.782 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.783 
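The get_nodes pass above records each NUMA node's HugePages_Total from sysfs before the per-node surplus check that follows. A condensed sketch of that pass, reusing the get_meminfo helper sketched earlier (array and variable names are illustrative; this VM exposes a single node), would be:

nodes_sys=()
no_nodes=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    id=${node_dir##*node}
    nodes_sys[id]=$(get_meminfo HugePages_Total "$id")
    no_nodes=$((no_nodes + 1))
done
echo "nodes=$no_nodes hugepages='${nodes_sys[*]}'"   # one node holding 512 pages here

That 512-page count per node is what later produces the "node0=512 expecting 512" summary line.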
09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9188432 kB' 'MemUsed: 3053540 kB' 'SwapCached: 0 kB' 'Active: 452764 kB' 'Inactive: 1258640 kB' 'Active(anon): 131920 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1589948 kB' 'Mapped: 48688 kB' 'AnonPages: 123036 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60980 kB' 'Slab: 132232 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.783 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:35.784 node0=512 expecting 512 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:35.784 00:03:35.784 real 0m0.609s 00:03:35.784 user 0m0.297s 00:03:35.784 sys 0m0.294s 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.784 09:29:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:35.784 ************************************ 00:03:35.784 END TEST custom_alloc 00:03:35.784 
************************************ 00:03:36.044 09:29:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:36.044 09:29:30 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:36.044 09:29:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.044 09:29:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.044 09:29:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.044 ************************************ 00:03:36.044 START TEST no_shrink_alloc 00:03:36.044 ************************************ 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.044 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:36.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.306 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.306 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:36.306 09:29:30 
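The get_test_nr_hugepages 2097152 0 call above converts the requested size into a page count: with the 2048 kB default hugepage size on this VM, 2097152 kB / 2048 kB = 1024 pages, all assigned to node 0. A hedged reconstruction of that arithmetic (function body illustrative, size taken in kB):

get_test_nr_hugepages() {
    local size_kb=$1; shift                 # requested total, in kB
    local node_ids=("$@")                   # e.g. ("0") in the call above
    local default_hugepages=2048            # kB per 2 MiB page on this VM
    local nr_hugepages=$((size_kb / default_hugepages))
    local node
    for node in "${node_ids[@]}"; do
        echo "node$node: requesting $nr_hugepages hugepages"
    done
}

get_test_nr_hugepages 2097152 0   # -> node0: requesting 1024 hugepages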
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8134276 kB' 'MemAvailable: 9507312 kB' 'Buffers: 2436 kB' 'Cached: 1587512 kB' 'SwapCached: 0 kB' 'Active: 452688 kB' 'Inactive: 1258640 kB' 'Active(anon): 131844 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123176 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132272 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71292 kB' 'KernelStack: 6288 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 
09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.306 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # 
[[ -n '' ]] 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.307 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8134320 kB' 'MemAvailable: 9507356 kB' 'Buffers: 2436 kB' 'Cached: 1587512 kB' 'SwapCached: 0 kB' 'Active: 452744 kB' 'Inactive: 1258640 kB' 'Active(anon): 131900 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123060 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132272 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71292 kB' 'KernelStack: 6304 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.308 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.309 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8134320 kB' 'MemAvailable: 9507356 kB' 'Buffers: 2436 kB' 'Cached: 1587512 kB' 'SwapCached: 0 kB' 'Active: 452512 kB' 'Inactive: 1258640 kB' 'Active(anon): 131668 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123060 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132252 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71272 kB' 'KernelStack: 6304 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.310 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.311 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.312 09:29:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.312 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.313 nr_hugepages=1024 00:03:36.313 resv_hugepages=0 00:03:36.313 surplus_hugepages=0 00:03:36.313 anon_hugepages=0 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
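Just above, hugepages.sh has settled anon=0 (AnonHugePages), surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd), echoed nr_hugepages=1024, and passed the checks (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) before re-reading HugePages_Total in the trace that continues below. A small stand-alone sketch of that accounting, using values taken from the meminfo snapshots printed in this log (variable names chosen to mirror the script; this is an illustrative sketch, not the SPDK source):

# Values from the meminfo dumps in this log.
nr_hugepages=1024
anon=0    # AnonHugePages: 0 kB
surp=0    # HugePages_Surp: 0
resv=0    # HugePages_Rsvd: 0

# The first two tests mirror the traced hugepages.sh checks; the anon test is an
# extra illustrative sanity check that no transparent hugepages are in play.
if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )) && (( anon == 0 )); then
        echo "hugepage pool consistent: $((nr_hugepages * 2048)) kB (matches Hugetlb: 2097152 kB)"
else
        echo "unexpected hugepage accounting" >&2
fi

With 1024 pages of 2048 kB each, the pool is 2097152 kB, which agrees with the Hugetlb line in every snapshot above, so the no_shrink_alloc test proceeds with a fully accounted-for pool.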
00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8134320 kB' 'MemAvailable: 9507356 kB' 'Buffers: 2436 kB' 'Cached: 1587512 kB' 'SwapCached: 0 kB' 'Active: 452864 kB' 'Inactive: 1258640 kB' 'Active(anon): 132020 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123132 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132252 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71272 kB' 'KernelStack: 6288 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.313 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.574 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.575 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8134320 kB' 'MemUsed: 4107652 kB' 'SwapCached: 0 kB' 'Active: 452764 kB' 'Inactive: 1258640 kB' 'Active(anon): 131920 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1589948 kB' 'Mapped: 48732 kB' 'AnonPages: 123072 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60980 kB' 'Slab: 132252 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 
09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.576 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.577 09:29:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.577 node0=1024 expecting 1024 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.577 09:29:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:36.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.839 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.839 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:36.839 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:36.839 09:29:31 
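The trace above culminates in the "node0=1024 expecting 1024" check: the system-wide HugePages_Total must equal the requested page count plus surplus and reserved pages, and each NUMA node's share is then compared against the expectation. A condensed sketch of that accounting follows, assuming the get_meminfo sketch shown earlier; the real setup/hugepages.sh keeps the per-node bookkeeping in the nodes_test[]/nodes_sys[] arrays visible in the trace, so this is an approximation, not the script itself.

    # Condensed sketch of the verification traced above (setup/hugepages.sh,
    # roughly @100 through @130), using the figures from this run.
    nr_hugepages=1024                               # pages the test configured
    resv=$(get_meminfo HugePages_Rsvd)              # 0 in this run
    surp=$(get_meminfo HugePages_Surp)              # 0 in this run
    total=$(get_meminfo HugePages_Total)            # 1024 in this run

    # The pool is consistent when the kernel's total matches request + surplus + reserved.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage pool mismatch" >&2

    # Single NUMA node here, so node0 is expected to hold the whole 1024-page pool.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        node_surp=$(get_meminfo HugePages_Surp "$node")
        echo "node${node}=$(( nr_hugepages + node_surp )) expecting ${nr_hugepages}"
    done

The "setup output" step that follows runs scripts/setup.sh with CLEAR_HUGE=no and NRHUGE=512; since node0 already holds 1024 pages, setup.sh only logs the INFO line above and leaves the pool as it is, and the second verify_nr_hugepages pass re-checks the same 1024 figure.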
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8134188 kB' 'MemAvailable: 9507224 kB' 'Buffers: 2436 kB' 'Cached: 1587512 kB' 'SwapCached: 0 kB' 'Active: 453272 kB' 'Inactive: 1258640 kB' 'Active(anon): 132428 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123564 kB' 'Mapped: 48852 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132260 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71280 kB' 'KernelStack: 6356 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
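This second pass first gates on transparent hugepages before counting anonymous ones: the "always [madvise] never" string tested at setup/hugepages.sh@96 is the usual format of /sys/kernel/mm/transparent_hugepage/enabled (the trace only shows the string, so the file path is an assumption), and AnonHugePages is fetched at @97 only because THP is not set to [never]. Roughly:

    # Sketch of the anon-hugepage gate traced at setup/hugepages.sh@96-@97.
    # Assumption: the tested string comes from the standard THP control file below.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 (kB) in this run
    fi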
00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.839 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.840 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8134188 kB' 'MemAvailable: 9507224 kB' 'Buffers: 2436 kB' 'Cached: 1587512 kB' 'SwapCached: 0 kB' 'Active: 452824 kB' 'Inactive: 1258640 kB' 'Active(anon): 131980 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123124 kB' 'Mapped: 48852 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132244 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71264 kB' 'KernelStack: 6244 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
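The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" records above and below are xtrace output from get_meminfo in setup/common.sh: it walks the meminfo snapshot one "Field: value" pair at a time, skips every field that is not the one requested, and echoes the matching value. Because no NUMA node was passed, /sys/devices/system/node/node/meminfo does not exist and the system-wide /proc/meminfo is used. A minimal stand-alone sketch of that lookup pattern follows; it is an illustration written for this log, not the SPDK source, and it leaves out the per-node branch and the "Node <n>" prefix stripping visible in the trace:

get_meminfo() {
    local get=$1
    local mem_f=/proc/meminfo            # no node given, so the system-wide file is used
    local var val _
    # IFS=': ' splits "HugePages_Surp:       0" into var=HugePages_Surp, val=0
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue # not the requested field; keep scanning
        echo "${val:-0}"
        return 0
    done < "$mem_f"
    echo 0                               # field absent from the file
}
# e.g. "get_meminfo HugePages_Surp" prints 0 for the snapshot printed above,
# which is the value hugepages.sh stores in surp.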
00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.841 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 
09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.842 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8133936 kB' 'MemAvailable: 9506972 kB' 'Buffers: 2436 kB' 'Cached: 1587512 kB' 'SwapCached: 0 kB' 'Active: 452716 kB' 'Inactive: 1258640 kB' 'Active(anon): 131872 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122976 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132252 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71272 kB' 'KernelStack: 6288 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
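As a sanity check, the meminfo snapshots printed in this trace are internally consistent: HugePages_Total: 1024 at Hugepagesize: 2048 kB accounts exactly for Hugetlb: 1024 * 2048 kB = 2097152 kB, i.e. 2 GiB carved out of the 12241972 kB MemTotal, and none of those pages are surplus or reserved yet, which is why the HugePages_Surp and HugePages_Rsvd lookups in this section keep returning 0.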
00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.843 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.844 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:36.845 nr_hugepages=1024 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.845 resv_hugepages=0 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.845 surplus_hugepages=0 00:03:36.845 anon_hugepages=0 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8133936 kB' 'MemAvailable: 9506972 kB' 'Buffers: 2436 kB' 'Cached: 1587512 kB' 'SwapCached: 0 kB' 'Active: 452780 kB' 'Inactive: 1258640 kB' 'Active(anon): 131936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123088 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 60980 kB' 'Slab: 132252 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71272 kB' 'KernelStack: 6304 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.845 09:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:36.845 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
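By this point hugepages.sh has recorded anon=0, surp=0 and resv=0, echoed nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and is fetching HugePages_Total to confirm the pool was not shrunk. A rough sketch of that bookkeeping, reusing the get_meminfo sketch above and the variable names from the trace but simplified from the actual hugepages.sh:

nr_hugepages=${nr_hugepages:-1024}   # configured earlier by the test; 1024 in this run

anon=$(get_meminfo AnonHugePages)    # transparent hugepages in use (kB) -> 0 here
surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the configured pool -> 0
resv=$(get_meminfo HugePages_Rsvd)   # reserved but not yet faulted pages -> 0

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# no_shrink_alloc passes only if the pool the test configured is still intact:
# the requested count must equal what the kernel reports, with no surplus or
# reserved pages hiding a shrink.
if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
    get_meminfo HugePages_Total      # the lookup continuing below; expected to print 1024
fi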
00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.106 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8133936 kB' 'MemUsed: 4108036 kB' 'SwapCached: 0 kB' 'Active: 452712 kB' 'Inactive: 1258640 kB' 'Active(anon): 131868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320844 
kB' 'Inactive(file): 1258640 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1589948 kB' 'Mapped: 48732 kB' 'AnonPages: 122972 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60980 kB' 'Slab: 132252 kB' 'SReclaimable: 60980 kB' 'SUnreclaim: 71272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 
09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.107 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.108 node0=1024 expecting 1024 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.108 00:03:37.108 real 0m1.073s 00:03:37.108 user 0m0.515s 00:03:37.108 sys 0m0.588s 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.108 09:29:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:37.108 ************************************ 00:03:37.108 END TEST no_shrink_alloc 00:03:37.108 ************************************ 00:03:37.108 09:29:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:37.108 09:29:31 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:37.108 09:29:31 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:37.108 09:29:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:37.108 
09:29:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:37.108 09:29:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:37.108 09:29:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:37.108 09:29:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:37.108 09:29:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:37.108 09:29:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:37.108 00:03:37.108 real 0m4.718s 00:03:37.108 user 0m2.174s 00:03:37.108 sys 0m2.522s 00:03:37.108 09:29:31 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.108 09:29:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:37.108 ************************************ 00:03:37.109 END TEST hugepages 00:03:37.109 ************************************ 00:03:37.109 09:29:31 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:37.109 09:29:31 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:37.109 09:29:31 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.109 09:29:31 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.109 09:29:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:37.109 ************************************ 00:03:37.109 START TEST driver 00:03:37.109 ************************************ 00:03:37.109 09:29:31 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:37.109 * Looking for test storage... 00:03:37.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:37.109 09:29:31 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:37.109 09:29:31 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.109 09:29:31 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:37.676 09:29:32 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:37.676 09:29:32 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.676 09:29:32 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.676 09:29:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:37.676 ************************************ 00:03:37.676 START TEST guess_driver 00:03:37.676 ************************************ 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
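
The long runs of trace above are the get_meminfo loop from test/setup/common.sh: it reads /proc/meminfo (or, when a node is given, /sys/devices/system/node/nodeN/meminfo with the "Node N" prefix stripped), splits each line on ': ', and skips every key until it reaches the one it was asked for -- here HugePages_Total, then HugePages_Surp for node 0 -- echoing the value and returning. The following is only a minimal sketch of that pattern with an illustrative helper name; the real script differs in detail.

    # Sketch of the meminfo lookup pattern exercised by the trace above.
    # Helper name and exact matching rules are illustrative, not the script's.
    get_meminfo_value() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
                return 0
            fi
        done
        return 1
    }

    # Example of the checks the test just made on node 0:
    # get_meminfo_value HugePages_Total 0   -> 1024
    # get_meminfo_value HugePages_Surp 0    -> 0
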
00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:37.676 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:37.676 Looking for driver=uio_pci_generic 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.676 09:29:32 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.611 09:29:32 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.177 00:03:39.177 real 0m1.417s 00:03:39.177 user 0m0.511s 00:03:39.177 sys 0m0.893s 00:03:39.177 09:29:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:39.177 ************************************ 00:03:39.177 END TEST guess_driver 00:03:39.177 ************************************ 00:03:39.177 09:29:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:39.177 09:29:33 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:39.177 00:03:39.177 real 0m2.105s 00:03:39.177 user 0m0.741s 00:03:39.177 sys 0m1.387s 00:03:39.177 09:29:33 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.177 09:29:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:39.177 ************************************ 00:03:39.177 END TEST driver 00:03:39.177 ************************************ 00:03:39.177 09:29:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:39.177 09:29:33 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:39.177 09:29:33 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.177 09:29:33 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.177 09:29:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:39.177 ************************************ 00:03:39.177 START TEST devices 00:03:39.177 ************************************ 00:03:39.177 09:29:33 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:39.436 * Looking for test storage... 00:03:39.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:39.436 09:29:33 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:39.436 09:29:33 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:39.436 09:29:33 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.436 09:29:33 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.003 09:29:34 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
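
The guess_driver run that just finished picked uio_pci_generic by elimination: vfio was rejected because this VM exposes no /sys/kernel/iommu_groups entries and unsafe no-IOMMU mode is not enabled, and modprobe --show-depends confirmed the uio_pci_generic module resolves. A rough sketch of that fallback, assuming the conditions shown in the trace (the function name is illustrative, not the script's own):

    # Sketch of the driver selection traced above (test/setup/driver.sh logic,
    # simplified): prefer vfio-pci when IOMMU groups exist or unsafe no-IOMMU
    # mode is enabled, otherwise fall back to uio_pci_generic if it resolves.
    pick_driver_sketch() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=""
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
            return 0
        fi
        # No IOMMU support on this host, so check uio_pci_generic is loadable.
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found'
        return 1
    }

The devices test that starts next applies a similar scan-and-skip pattern, first checking each /sys/block/nvme*/queue/zoned entry before probing the disks for GPT data.
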
00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:40.003 09:29:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:40.003 09:29:34 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:40.003 09:29:34 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:40.003 09:29:34 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:40.003 09:29:34 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:40.003 09:29:34 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:40.003 09:29:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.003 09:29:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:40.003 09:29:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:40.003 09:29:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:40.003 09:29:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:40.003 09:29:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:40.003 09:29:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:40.003 09:29:34 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:40.261 No valid GPT data, bailing 00:03:40.261 09:29:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:40.261 09:29:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:40.261 09:29:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:40.261 09:29:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:40.261 09:29:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:40.261 09:29:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:40.261 09:29:34 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:40.261 09:29:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:40.261 09:29:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.261 09:29:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:40.261 09:29:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.261 09:29:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:40.262 
09:29:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:40.262 09:29:34 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:40.262 No valid GPT data, bailing 00:03:40.262 09:29:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:40.262 09:29:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:40.262 09:29:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:40.262 09:29:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:40.262 09:29:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:40.262 09:29:34 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:40.262 09:29:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:40.262 09:29:34 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:40.262 No valid GPT data, bailing 00:03:40.262 09:29:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:40.262 09:29:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:40.262 09:29:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:40.262 09:29:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:40.262 09:29:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:40.262 09:29:34 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:40.262 09:29:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:40.262 09:29:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:40.262 09:29:34 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:40.262 No valid GPT data, bailing 00:03:40.262 09:29:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:40.520 09:29:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:40.520 09:29:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:40.520 09:29:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:40.520 09:29:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:40.520 09:29:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:40.520 09:29:34 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:40.520 09:29:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:40.520 09:29:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:40.520 09:29:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:40.520 09:29:34 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:40.520 09:29:34 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:40.520 09:29:34 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:40.520 09:29:34 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.520 09:29:34 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.520 09:29:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:40.520 ************************************ 00:03:40.520 START TEST nvme_mount 00:03:40.520 ************************************ 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:40.521 09:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:41.458 Creating new GPT entries in memory. 00:03:41.458 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:41.458 other utilities. 00:03:41.458 09:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:41.458 09:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.458 09:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:41.458 09:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:41.458 09:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:42.393 Creating new GPT entries in memory. 00:03:42.393 The operation has completed successfully. 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57107 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.393 09:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:42.651 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:42.651 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:42.651 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:42.651 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.651 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:42.651 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:42.909 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:42.909 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:43.168 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:43.168 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:43.168 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:43.168 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@113 
-- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.168 09:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:43.426 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.426 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:43.426 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.426 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.426 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.426 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.685 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.685 09:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.685 09:29:38 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.685 09:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:43.942 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.942 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:43.942 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.942 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.942 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:43.942 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.200 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:44.200 00:03:44.200 real 0m3.908s 00:03:44.200 user 0m0.684s 00:03:44.200 sys 0m0.973s 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.200 09:29:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:44.200 ************************************ 00:03:44.200 END TEST nvme_mount 00:03:44.200 ************************************ 00:03:44.458 09:29:38 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:44.458 09:29:38 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:44.458 09:29:38 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.458 09:29:38 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.458 09:29:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:44.458 ************************************ 00:03:44.458 START TEST dm_mount 00:03:44.458 ************************************ 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:44.458 09:29:38 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:45.393 Creating new GPT entries in memory. 00:03:45.393 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:45.393 other utilities. 00:03:45.393 09:29:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:45.393 09:29:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.393 09:29:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:45.393 09:29:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:45.393 09:29:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:46.328 Creating new GPT entries in memory. 00:03:46.328 The operation has completed successfully. 00:03:46.328 09:29:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:46.328 09:29:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.328 09:29:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:46.328 09:29:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:46.328 09:29:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:47.736 The operation has completed successfully. 
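The two "operation has completed successfully" records above mark the end of the partitioning step for the dm_mount case: the same zap-then-partition sequence nvme_mount used earlier, only with two partitions instead of one. Boiled down to plain shell (device name and sector ranges copied from the trace; run against a scratch disk only, and note the real script waits for the partition uevents via scripts/sync_dev_uevents.sh rather than the udevadm call sketched here):

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                             # destroy any existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:264191     # partition 1, sectors 2048-264191
    flock "$disk" sgdisk "$disk" --new=2:264192:526335   # partition 2, sectors 264192-526335
    udevadm settle                                       # rough stand-in: wait for /dev/nvme0n1p{1,2} to appear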
00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57540 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.736 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:47.737 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:47.737 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:47.737 09:29:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:03:47.737 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:47.737 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.737 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:47.737 09:29:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:47.737 09:29:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.737 09:29:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:47.737 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.737 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:47.737 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:47.737 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.737 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.737 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.737 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.737 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.995 09:29:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:48.253 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:48.253 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:48.253 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:48.253 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.253 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:48.253 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.253 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:48.253 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:48.511 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:48.511 00:03:48.511 real 0m4.138s 00:03:48.511 user 0m0.409s 00:03:48.511 sys 0m0.682s 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.511 09:29:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:48.511 ************************************ 00:03:48.511 END TEST dm_mount 00:03:48.511 ************************************ 00:03:48.511 09:29:42 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:48.511 09:29:42 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:48.511 09:29:42 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:48.511 09:29:42 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:48.511 09:29:42 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:48.511 09:29:42 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:48.511 09:29:42 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:48.511 09:29:42 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:48.769 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:48.769 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:48.769 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:48.769 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:48.769 09:29:43 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:48.769 09:29:43 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:48.769 09:29:43 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:48.769 09:29:43 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:48.769 09:29:43 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:48.769 09:29:43 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:48.769 09:29:43 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:48.769 00:03:48.769 real 0m9.566s 00:03:48.769 user 0m1.728s 00:03:48.769 sys 0m2.256s 00:03:48.769 09:29:43 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.769 09:29:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:48.769 ************************************ 00:03:48.769 END TEST devices 00:03:48.769 ************************************ 00:03:48.769 09:29:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:48.769 00:03:48.769 real 0m21.381s 00:03:48.769 user 0m6.761s 00:03:48.769 sys 0m8.939s 00:03:48.769 09:29:43 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.769 09:29:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:48.769 ************************************ 00:03:48.769 END TEST setup.sh 00:03:48.769 ************************************ 00:03:49.026 09:29:43 -- common/autotest_common.sh@1142 -- # return 0 00:03:49.026 09:29:43 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:49.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.593 Hugepages 00:03:49.593 node hugesize free / total 00:03:49.593 node0 1048576kB 0 / 0 00:03:49.593 node0 2048kB 2048 / 2048 00:03:49.593 00:03:49.593 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:49.593 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:49.593 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:49.593 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:03:49.593 09:29:44 -- spdk/autotest.sh@130 -- # uname -s 00:03:49.593 09:29:44 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:49.593 09:29:44 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:49.593 09:29:44 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:50.528 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.528 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.528 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.528 09:29:44 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:51.463 09:29:45 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:51.463 09:29:45 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:51.463 09:29:45 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:51.463 09:29:45 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:51.463 09:29:45 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:51.463 09:29:45 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:51.463 09:29:45 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:51.463 09:29:45 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:51.463 09:29:45 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:51.722 09:29:45 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:03:51.722 09:29:45 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:51.722 09:29:45 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.980 Waiting for block devices as requested 00:03:51.980 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:51.980 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:52.238 09:29:46 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:52.238 09:29:46 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:52.238 09:29:46 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:52.238 09:29:46 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:03:52.239 09:29:46 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:52.239 09:29:46 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:52.239 09:29:46 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:52.239 09:29:46 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:03:52.239 09:29:46 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:03:52.239 09:29:46 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:03:52.239 09:29:46 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:03:52.239 09:29:46 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:52.239 09:29:46 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:52.239 09:29:46 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:03:52.239 09:29:46 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:52.239 09:29:46 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:52.239 09:29:46 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:03:52.239 09:29:46 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:52.239 09:29:46 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:52.239 09:29:46 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:52.239 09:29:46 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:52.239 09:29:46 -- common/autotest_common.sh@1557 -- # continue 00:03:52.239 
09:29:46 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:52.239 09:29:46 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:52.239 09:29:46 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:52.239 09:29:46 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:03:52.239 09:29:46 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:52.239 09:29:46 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:52.239 09:29:46 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:52.239 09:29:46 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:52.239 09:29:46 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:52.239 09:29:46 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:52.239 09:29:46 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:52.239 09:29:46 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:52.239 09:29:46 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:52.239 09:29:46 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:03:52.239 09:29:46 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:52.239 09:29:46 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:52.239 09:29:46 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:52.239 09:29:46 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:52.239 09:29:46 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:52.239 09:29:46 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:52.239 09:29:46 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:52.239 09:29:46 -- common/autotest_common.sh@1557 -- # continue 00:03:52.239 09:29:46 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:52.239 09:29:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:52.239 09:29:46 -- common/autotest_common.sh@10 -- # set +x 00:03:52.239 09:29:46 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:52.239 09:29:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:52.239 09:29:46 -- common/autotest_common.sh@10 -- # set +x 00:03:52.239 09:29:46 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.062 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.062 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:53.062 09:29:47 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:53.062 09:29:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.062 09:29:47 -- common/autotest_common.sh@10 -- # set +x 00:03:53.062 09:29:47 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:53.062 09:29:47 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:53.062 09:29:47 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:53.062 09:29:47 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:53.062 09:29:47 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:53.062 09:29:47 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:53.062 09:29:47 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:53.062 09:29:47 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:53.062 09:29:47 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.062 09:29:47 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:53.062 09:29:47 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:53.319 09:29:47 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:03:53.319 09:29:47 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:53.319 09:29:47 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:53.319 09:29:47 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:53.319 09:29:47 -- common/autotest_common.sh@1580 -- # device=0x0010 00:03:53.319 09:29:47 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:53.319 09:29:47 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:53.319 09:29:47 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:53.319 09:29:47 -- common/autotest_common.sh@1580 -- # device=0x0010 00:03:53.319 09:29:47 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:53.319 09:29:47 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:03:53.319 09:29:47 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:03:53.319 09:29:47 -- common/autotest_common.sh@1593 -- # return 0 00:03:53.319 09:29:47 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:53.319 09:29:47 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:53.319 09:29:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:53.319 09:29:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:53.319 09:29:47 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:53.319 09:29:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.319 09:29:47 -- common/autotest_common.sh@10 -- # set +x 00:03:53.319 09:29:47 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:03:53.319 09:29:47 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:03:53.319 09:29:47 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:03:53.319 09:29:47 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:53.319 09:29:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.319 09:29:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.319 09:29:47 -- common/autotest_common.sh@10 -- # set +x 00:03:53.319 ************************************ 00:03:53.319 START TEST env 00:03:53.319 ************************************ 00:03:53.319 09:29:47 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:53.319 * Looking for test storage... 
00:03:53.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:53.319 09:29:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:53.319 09:29:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.319 09:29:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.319 09:29:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.319 ************************************ 00:03:53.319 START TEST env_memory 00:03:53.319 ************************************ 00:03:53.319 09:29:47 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:53.319 00:03:53.319 00:03:53.319 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.319 http://cunit.sourceforge.net/ 00:03:53.319 00:03:53.319 00:03:53.319 Suite: memory 00:03:53.319 Test: alloc and free memory map ...[2024-07-15 09:29:47.697366] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:53.319 passed 00:03:53.319 Test: mem map translation ...[2024-07-15 09:29:47.728575] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:53.319 [2024-07-15 09:29:47.728654] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:53.319 [2024-07-15 09:29:47.728751] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:53.319 [2024-07-15 09:29:47.728774] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:53.319 passed 00:03:53.576 Test: mem map registration ...[2024-07-15 09:29:47.794465] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:53.576 [2024-07-15 09:29:47.794545] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:53.576 passed 00:03:53.576 Test: mem map adjacent registrations ...passed 00:03:53.576 00:03:53.576 Run Summary: Type Total Ran Passed Failed Inactive 00:03:53.576 suites 1 1 n/a 0 0 00:03:53.576 tests 4 4 4 0 0 00:03:53.576 asserts 152 152 152 0 n/a 00:03:53.576 00:03:53.576 Elapsed time = 0.219 seconds 00:03:53.576 00:03:53.576 real 0m0.236s 00:03:53.576 user 0m0.221s 00:03:53.576 sys 0m0.011s 00:03:53.576 09:29:47 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.576 09:29:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:53.576 ************************************ 00:03:53.576 END TEST env_memory 00:03:53.576 ************************************ 00:03:53.576 09:29:47 env -- common/autotest_common.sh@1142 -- # return 0 00:03:53.576 09:29:47 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:53.576 09:29:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.576 09:29:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.576 09:29:47 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.576 ************************************ 00:03:53.576 START TEST env_vtophys 
00:03:53.576 ************************************ 00:03:53.576 09:29:47 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:53.576 EAL: lib.eal log level changed from notice to debug 00:03:53.576 EAL: Detected lcore 0 as core 0 on socket 0 00:03:53.576 EAL: Detected lcore 1 as core 0 on socket 0 00:03:53.576 EAL: Detected lcore 2 as core 0 on socket 0 00:03:53.576 EAL: Detected lcore 3 as core 0 on socket 0 00:03:53.576 EAL: Detected lcore 4 as core 0 on socket 0 00:03:53.576 EAL: Detected lcore 5 as core 0 on socket 0 00:03:53.576 EAL: Detected lcore 6 as core 0 on socket 0 00:03:53.576 EAL: Detected lcore 7 as core 0 on socket 0 00:03:53.576 EAL: Detected lcore 8 as core 0 on socket 0 00:03:53.576 EAL: Detected lcore 9 as core 0 on socket 0 00:03:53.576 EAL: Maximum logical cores by configuration: 128 00:03:53.576 EAL: Detected CPU lcores: 10 00:03:53.576 EAL: Detected NUMA nodes: 1 00:03:53.576 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:53.576 EAL: Detected shared linkage of DPDK 00:03:53.576 EAL: No shared files mode enabled, IPC will be disabled 00:03:53.576 EAL: Selected IOVA mode 'PA' 00:03:53.576 EAL: Probing VFIO support... 00:03:53.576 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:53.576 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:53.576 EAL: Ask a virtual area of 0x2e000 bytes 00:03:53.576 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:53.576 EAL: Setting up physically contiguous memory... 00:03:53.576 EAL: Setting maximum number of open files to 524288 00:03:53.576 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:53.576 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:53.576 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.576 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:53.576 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.576 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.576 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:53.576 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:53.576 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.576 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:53.576 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.576 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.576 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:53.576 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:53.576 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.576 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:53.576 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.576 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.576 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:53.576 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:53.576 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.576 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:53.576 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.576 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.576 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:53.576 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:53.576 EAL: Hugepages will be freed exactly as allocated. 
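The EAL lines above show the memseg lists being reserved on top of the 2 MiB hugepage pool that the earlier setup.sh status output reported as "node0 2048kB 2048 / 2048". If an init like this ever fails for lack of memory, the pool can be inspected outside the test with standard kernel interfaces (nothing SPDK-specific, not part of the trace):

    grep -i huge /proc/meminfo                                   # HugePages_Total/Free and Hugepagesize
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # 2 MiB pages reserved on this host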
00:03:53.576 EAL: No shared files mode enabled, IPC is disabled 00:03:53.576 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: TSC frequency is ~2200000 KHz 00:03:53.833 EAL: Main lcore 0 is ready (tid=7fbe26e38a00;cpuset=[0]) 00:03:53.833 EAL: Trying to obtain current memory policy. 00:03:53.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.833 EAL: Restoring previous memory policy: 0 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was expanded by 2MB 00:03:53.833 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:53.833 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:53.833 EAL: Mem event callback 'spdk:(nil)' registered 00:03:53.833 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:53.833 00:03:53.833 00:03:53.833 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.833 http://cunit.sourceforge.net/ 00:03:53.833 00:03:53.833 00:03:53.833 Suite: components_suite 00:03:53.833 Test: vtophys_malloc_test ...passed 00:03:53.833 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:53.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.833 EAL: Restoring previous memory policy: 4 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was expanded by 4MB 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was shrunk by 4MB 00:03:53.833 EAL: Trying to obtain current memory policy. 00:03:53.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.833 EAL: Restoring previous memory policy: 4 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was expanded by 6MB 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was shrunk by 6MB 00:03:53.833 EAL: Trying to obtain current memory policy. 00:03:53.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.833 EAL: Restoring previous memory policy: 4 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was expanded by 10MB 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was shrunk by 10MB 00:03:53.833 EAL: Trying to obtain current memory policy. 
00:03:53.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.833 EAL: Restoring previous memory policy: 4 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was expanded by 18MB 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was shrunk by 18MB 00:03:53.833 EAL: Trying to obtain current memory policy. 00:03:53.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.833 EAL: Restoring previous memory policy: 4 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was expanded by 34MB 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was shrunk by 34MB 00:03:53.833 EAL: Trying to obtain current memory policy. 00:03:53.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.833 EAL: Restoring previous memory policy: 4 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was expanded by 66MB 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was shrunk by 66MB 00:03:53.833 EAL: Trying to obtain current memory policy. 00:03:53.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.833 EAL: Restoring previous memory policy: 4 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was expanded by 130MB 00:03:53.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.833 EAL: request: mp_malloc_sync 00:03:53.833 EAL: No shared files mode enabled, IPC is disabled 00:03:53.833 EAL: Heap on socket 0 was shrunk by 130MB 00:03:53.833 EAL: Trying to obtain current memory policy. 00:03:53.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.090 EAL: Restoring previous memory policy: 4 00:03:54.090 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.090 EAL: request: mp_malloc_sync 00:03:54.090 EAL: No shared files mode enabled, IPC is disabled 00:03:54.090 EAL: Heap on socket 0 was expanded by 258MB 00:03:54.090 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.090 EAL: request: mp_malloc_sync 00:03:54.090 EAL: No shared files mode enabled, IPC is disabled 00:03:54.090 EAL: Heap on socket 0 was shrunk by 258MB 00:03:54.090 EAL: Trying to obtain current memory policy. 
00:03:54.090 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.090 EAL: Restoring previous memory policy: 4 00:03:54.090 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.090 EAL: request: mp_malloc_sync 00:03:54.090 EAL: No shared files mode enabled, IPC is disabled 00:03:54.091 EAL: Heap on socket 0 was expanded by 514MB 00:03:54.385 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.385 EAL: request: mp_malloc_sync 00:03:54.385 EAL: No shared files mode enabled, IPC is disabled 00:03:54.385 EAL: Heap on socket 0 was shrunk by 514MB 00:03:54.385 EAL: Trying to obtain current memory policy. 00:03:54.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.644 EAL: Restoring previous memory policy: 4 00:03:54.644 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.644 EAL: request: mp_malloc_sync 00:03:54.644 EAL: No shared files mode enabled, IPC is disabled 00:03:54.644 EAL: Heap on socket 0 was expanded by 1026MB 00:03:54.902 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.160 passed 00:03:55.160 00:03:55.160 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.160 suites 1 1 n/a 0 0 00:03:55.160 tests 2 2 2 0 0 00:03:55.160 asserts 5281 5281 5281 0 n/a 00:03:55.160 00:03:55.160 Elapsed time = 1.247 seconds 00:03:55.160 EAL: request: mp_malloc_sync 00:03:55.160 EAL: No shared files mode enabled, IPC is disabled 00:03:55.160 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:55.160 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.160 EAL: request: mp_malloc_sync 00:03:55.160 EAL: No shared files mode enabled, IPC is disabled 00:03:55.160 EAL: Heap on socket 0 was shrunk by 2MB 00:03:55.160 EAL: No shared files mode enabled, IPC is disabled 00:03:55.160 EAL: No shared files mode enabled, IPC is disabled 00:03:55.160 EAL: No shared files mode enabled, IPC is disabled 00:03:55.160 00:03:55.160 real 0m1.451s 00:03:55.160 user 0m0.784s 00:03:55.160 sys 0m0.532s 00:03:55.160 09:29:49 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.160 ************************************ 00:03:55.160 END TEST env_vtophys 00:03:55.160 09:29:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:55.160 ************************************ 00:03:55.160 09:29:49 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.160 09:29:49 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:55.160 09:29:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.160 09:29:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.160 09:29:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.160 ************************************ 00:03:55.160 START TEST env_pci 00:03:55.160 ************************************ 00:03:55.160 09:29:49 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:55.160 00:03:55.160 00:03:55.160 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.160 http://cunit.sourceforge.net/ 00:03:55.160 00:03:55.160 00:03:55.160 Suite: pci 00:03:55.160 Test: pci_hook ...[2024-07-15 09:29:49.447263] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58726 has claimed it 00:03:55.160 passed 00:03:55.160 00:03:55.160 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.160 suites 1 1 n/a 0 0 00:03:55.160 tests 1 1 1 0 0 00:03:55.160 asserts 25 25 25 0 n/a 00:03:55.161 
00:03:55.161 Elapsed time = 0.002 seconds 00:03:55.161 EAL: Cannot find device (10000:00:01.0) 00:03:55.161 EAL: Failed to attach device on primary process 00:03:55.161 00:03:55.161 real 0m0.024s 00:03:55.161 user 0m0.011s 00:03:55.161 sys 0m0.011s 00:03:55.161 09:29:49 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.161 ************************************ 00:03:55.161 END TEST env_pci 00:03:55.161 ************************************ 00:03:55.161 09:29:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:55.161 09:29:49 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.161 09:29:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:55.161 09:29:49 env -- env/env.sh@15 -- # uname 00:03:55.161 09:29:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:55.161 09:29:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:55.161 09:29:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.161 09:29:49 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:55.161 09:29:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.161 09:29:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.161 ************************************ 00:03:55.161 START TEST env_dpdk_post_init 00:03:55.161 ************************************ 00:03:55.161 09:29:49 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.161 EAL: Detected CPU lcores: 10 00:03:55.161 EAL: Detected NUMA nodes: 1 00:03:55.161 EAL: Detected shared linkage of DPDK 00:03:55.161 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.161 EAL: Selected IOVA mode 'PA' 00:03:55.418 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.418 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:55.418 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:55.418 Starting DPDK initialization... 00:03:55.418 Starting SPDK post initialization... 00:03:55.418 SPDK NVMe probe 00:03:55.418 Attaching to 0000:00:10.0 00:03:55.418 Attaching to 0000:00:11.0 00:03:55.418 Attached to 0000:00:10.0 00:03:55.418 Attached to 0000:00:11.0 00:03:55.418 Cleaning up... 
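The probe above attached both emulated controllers, 0000:00:10.0 and 0000:00:11.0, through the uio_pci_generic binding that the earlier "nvme -> uio_pci_generic" rebind lines set up. A quick way to confirm which kernel driver currently owns them, independent of the test (plain sysfs lookup, not something the trace itself runs):

    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo "$bdf -> $(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
    done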
00:03:55.418 00:03:55.418 real 0m0.175s 00:03:55.418 user 0m0.045s 00:03:55.418 sys 0m0.030s 00:03:55.418 09:29:49 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.418 09:29:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.418 ************************************ 00:03:55.418 END TEST env_dpdk_post_init 00:03:55.418 ************************************ 00:03:55.418 09:29:49 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.418 09:29:49 env -- env/env.sh@26 -- # uname 00:03:55.418 09:29:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:55.418 09:29:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.418 09:29:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.418 09:29:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.418 09:29:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.418 ************************************ 00:03:55.418 START TEST env_mem_callbacks 00:03:55.418 ************************************ 00:03:55.418 09:29:49 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.418 EAL: Detected CPU lcores: 10 00:03:55.418 EAL: Detected NUMA nodes: 1 00:03:55.418 EAL: Detected shared linkage of DPDK 00:03:55.418 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.418 EAL: Selected IOVA mode 'PA' 00:03:55.418 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.418 00:03:55.418 00:03:55.418 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.418 http://cunit.sourceforge.net/ 00:03:55.418 00:03:55.418 00:03:55.418 Suite: memory 00:03:55.418 Test: test ... 
00:03:55.418 register 0x200000200000 2097152 00:03:55.418 malloc 3145728 00:03:55.418 register 0x200000400000 4194304 00:03:55.418 buf 0x200000500000 len 3145728 PASSED 00:03:55.418 malloc 64 00:03:55.418 buf 0x2000004fff40 len 64 PASSED 00:03:55.418 malloc 4194304 00:03:55.418 register 0x200000800000 6291456 00:03:55.418 buf 0x200000a00000 len 4194304 PASSED 00:03:55.418 free 0x200000500000 3145728 00:03:55.418 free 0x2000004fff40 64 00:03:55.418 unregister 0x200000400000 4194304 PASSED 00:03:55.418 free 0x200000a00000 4194304 00:03:55.418 unregister 0x200000800000 6291456 PASSED 00:03:55.418 malloc 8388608 00:03:55.418 register 0x200000400000 10485760 00:03:55.419 buf 0x200000600000 len 8388608 PASSED 00:03:55.419 free 0x200000600000 8388608 00:03:55.419 unregister 0x200000400000 10485760 PASSED 00:03:55.419 passed 00:03:55.419 00:03:55.419 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.419 suites 1 1 n/a 0 0 00:03:55.419 tests 1 1 1 0 0 00:03:55.419 asserts 15 15 15 0 n/a 00:03:55.419 00:03:55.419 Elapsed time = 0.008 seconds 00:03:55.419 00:03:55.419 real 0m0.144s 00:03:55.419 user 0m0.020s 00:03:55.419 sys 0m0.023s 00:03:55.419 09:29:49 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.419 09:29:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:55.419 ************************************ 00:03:55.419 END TEST env_mem_callbacks 00:03:55.419 ************************************ 00:03:55.676 09:29:49 env -- common/autotest_common.sh@1142 -- # return 0 00:03:55.676 00:03:55.676 real 0m2.352s 00:03:55.676 user 0m1.193s 00:03:55.676 sys 0m0.801s 00:03:55.676 09:29:49 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.676 ************************************ 00:03:55.676 09:29:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.676 END TEST env 00:03:55.676 ************************************ 00:03:55.676 09:29:49 -- common/autotest_common.sh@1142 -- # return 0 00:03:55.676 09:29:49 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:55.676 09:29:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.676 09:29:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.676 09:29:49 -- common/autotest_common.sh@10 -- # set +x 00:03:55.676 ************************************ 00:03:55.676 START TEST rpc 00:03:55.676 ************************************ 00:03:55.676 09:29:49 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:55.677 * Looking for test storage... 00:03:55.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:55.677 09:29:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58837 00:03:55.677 09:29:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.677 09:29:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58837 00:03:55.677 09:29:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:55.677 09:29:50 rpc -- common/autotest_common.sh@829 -- # '[' -z 58837 ']' 00:03:55.677 09:29:50 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.677 09:29:50 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:55.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.677 09:29:50 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
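With spdk_tgt launching and the harness waiting on /var/tmp/spdk.sock (startup continues below), the RPCs that rpc.sh drives through its rpc_cmd helper can also be issued by hand once the socket is up; a sketch, assuming the stock rpc.py client from the same checkout:

    # Hypothetical manual session against the target started above.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    "$SPDK_REPO/scripts/rpc.py" spdk_get_version         # sanity check, as skip_rpc does later
    "$SPDK_REPO/scripts/rpc.py" bdev_malloc_create 8 512  # same call rpc_integrity makes below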
00:03:55.677 09:29:50 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:55.677 09:29:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.677 [2024-07-15 09:29:50.106390] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:03:55.677 [2024-07-15 09:29:50.106491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58837 ] 00:03:55.937 [2024-07-15 09:29:50.244765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.937 [2024-07-15 09:29:50.381630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:55.937 [2024-07-15 09:29:50.381699] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58837' to capture a snapshot of events at runtime. 00:03:55.937 [2024-07-15 09:29:50.381714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:55.937 [2024-07-15 09:29:50.381726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:55.937 [2024-07-15 09:29:50.381735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58837 for offline analysis/debug. 00:03:55.937 [2024-07-15 09:29:50.381782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.207 [2024-07-15 09:29:50.439102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:03:56.773 09:29:51 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:56.773 09:29:51 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:56.773 09:29:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:56.773 09:29:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:56.773 09:29:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:56.773 09:29:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:56.773 09:29:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.773 09:29:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.773 09:29:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.773 ************************************ 00:03:56.773 START TEST rpc_integrity 00:03:56.773 ************************************ 00:03:56.773 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:56.773 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:56.773 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.773 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.773 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:56.773 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:56.773 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:56.773 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:56.773 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:03:56.773 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.773 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.773 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:56.773 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:56.773 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:56.773 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:56.773 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:56.773 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:56.773 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:56.773 { 00:03:56.773 "name": "Malloc0", 00:03:56.773 "aliases": [ 00:03:56.773 "c6d67dd3-2d13-4888-a7ae-161124b3e5e5" 00:03:56.773 ], 00:03:56.773 "product_name": "Malloc disk", 00:03:56.773 "block_size": 512, 00:03:56.773 "num_blocks": 16384, 00:03:56.773 "uuid": "c6d67dd3-2d13-4888-a7ae-161124b3e5e5", 00:03:56.773 "assigned_rate_limits": { 00:03:56.773 "rw_ios_per_sec": 0, 00:03:56.773 "rw_mbytes_per_sec": 0, 00:03:56.773 "r_mbytes_per_sec": 0, 00:03:56.773 "w_mbytes_per_sec": 0 00:03:56.773 }, 00:03:56.773 "claimed": false, 00:03:56.773 "zoned": false, 00:03:56.773 "supported_io_types": { 00:03:56.773 "read": true, 00:03:56.773 "write": true, 00:03:56.773 "unmap": true, 00:03:56.773 "flush": true, 00:03:56.773 "reset": true, 00:03:56.773 "nvme_admin": false, 00:03:56.773 "nvme_io": false, 00:03:56.773 "nvme_io_md": false, 00:03:56.773 "write_zeroes": true, 00:03:56.773 "zcopy": true, 00:03:56.773 "get_zone_info": false, 00:03:56.773 "zone_management": false, 00:03:56.773 "zone_append": false, 00:03:56.773 "compare": false, 00:03:56.773 "compare_and_write": false, 00:03:56.773 "abort": true, 00:03:56.773 "seek_hole": false, 00:03:56.773 "seek_data": false, 00:03:56.773 "copy": true, 00:03:56.773 "nvme_iov_md": false 00:03:56.773 }, 00:03:56.774 "memory_domains": [ 00:03:56.774 { 00:03:56.774 "dma_device_id": "system", 00:03:56.774 "dma_device_type": 1 00:03:56.774 }, 00:03:56.774 { 00:03:56.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:56.774 "dma_device_type": 2 00:03:56.774 } 00:03:56.774 ], 00:03:56.774 "driver_specific": {} 00:03:56.774 } 00:03:56.774 ]' 00:03:56.774 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:56.774 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.032 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:57.032 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.032 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.032 [2024-07-15 09:29:51.242909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:57.032 [2024-07-15 09:29:51.242966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.032 [2024-07-15 09:29:51.242991] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x96fda0 00:03:57.032 [2024-07-15 09:29:51.243001] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.032 [2024-07-15 09:29:51.244732] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:57.032 [2024-07-15 09:29:51.244771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:03:57.032 Passthru0 00:03:57.032 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.032 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.032 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.032 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.032 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.032 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.032 { 00:03:57.032 "name": "Malloc0", 00:03:57.032 "aliases": [ 00:03:57.032 "c6d67dd3-2d13-4888-a7ae-161124b3e5e5" 00:03:57.032 ], 00:03:57.032 "product_name": "Malloc disk", 00:03:57.032 "block_size": 512, 00:03:57.032 "num_blocks": 16384, 00:03:57.032 "uuid": "c6d67dd3-2d13-4888-a7ae-161124b3e5e5", 00:03:57.032 "assigned_rate_limits": { 00:03:57.032 "rw_ios_per_sec": 0, 00:03:57.032 "rw_mbytes_per_sec": 0, 00:03:57.032 "r_mbytes_per_sec": 0, 00:03:57.032 "w_mbytes_per_sec": 0 00:03:57.032 }, 00:03:57.032 "claimed": true, 00:03:57.032 "claim_type": "exclusive_write", 00:03:57.032 "zoned": false, 00:03:57.032 "supported_io_types": { 00:03:57.032 "read": true, 00:03:57.032 "write": true, 00:03:57.032 "unmap": true, 00:03:57.032 "flush": true, 00:03:57.032 "reset": true, 00:03:57.032 "nvme_admin": false, 00:03:57.032 "nvme_io": false, 00:03:57.032 "nvme_io_md": false, 00:03:57.032 "write_zeroes": true, 00:03:57.032 "zcopy": true, 00:03:57.032 "get_zone_info": false, 00:03:57.032 "zone_management": false, 00:03:57.032 "zone_append": false, 00:03:57.032 "compare": false, 00:03:57.032 "compare_and_write": false, 00:03:57.032 "abort": true, 00:03:57.032 "seek_hole": false, 00:03:57.033 "seek_data": false, 00:03:57.033 "copy": true, 00:03:57.033 "nvme_iov_md": false 00:03:57.033 }, 00:03:57.033 "memory_domains": [ 00:03:57.033 { 00:03:57.033 "dma_device_id": "system", 00:03:57.033 "dma_device_type": 1 00:03:57.033 }, 00:03:57.033 { 00:03:57.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.033 "dma_device_type": 2 00:03:57.033 } 00:03:57.033 ], 00:03:57.033 "driver_specific": {} 00:03:57.033 }, 00:03:57.033 { 00:03:57.033 "name": "Passthru0", 00:03:57.033 "aliases": [ 00:03:57.033 "c1fe9308-b179-5bc6-975e-1fa10bc3e043" 00:03:57.033 ], 00:03:57.033 "product_name": "passthru", 00:03:57.033 "block_size": 512, 00:03:57.033 "num_blocks": 16384, 00:03:57.033 "uuid": "c1fe9308-b179-5bc6-975e-1fa10bc3e043", 00:03:57.033 "assigned_rate_limits": { 00:03:57.033 "rw_ios_per_sec": 0, 00:03:57.033 "rw_mbytes_per_sec": 0, 00:03:57.033 "r_mbytes_per_sec": 0, 00:03:57.033 "w_mbytes_per_sec": 0 00:03:57.033 }, 00:03:57.033 "claimed": false, 00:03:57.033 "zoned": false, 00:03:57.033 "supported_io_types": { 00:03:57.033 "read": true, 00:03:57.033 "write": true, 00:03:57.033 "unmap": true, 00:03:57.033 "flush": true, 00:03:57.033 "reset": true, 00:03:57.033 "nvme_admin": false, 00:03:57.033 "nvme_io": false, 00:03:57.033 "nvme_io_md": false, 00:03:57.033 "write_zeroes": true, 00:03:57.033 "zcopy": true, 00:03:57.033 "get_zone_info": false, 00:03:57.033 "zone_management": false, 00:03:57.033 "zone_append": false, 00:03:57.033 "compare": false, 00:03:57.033 "compare_and_write": false, 00:03:57.033 "abort": true, 00:03:57.033 "seek_hole": false, 00:03:57.033 "seek_data": false, 00:03:57.033 "copy": true, 00:03:57.033 "nvme_iov_md": false 00:03:57.033 }, 00:03:57.033 "memory_domains": [ 00:03:57.033 { 00:03:57.033 "dma_device_id": "system", 00:03:57.033 
"dma_device_type": 1 00:03:57.033 }, 00:03:57.033 { 00:03:57.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.033 "dma_device_type": 2 00:03:57.033 } 00:03:57.033 ], 00:03:57.033 "driver_specific": { 00:03:57.033 "passthru": { 00:03:57.033 "name": "Passthru0", 00:03:57.033 "base_bdev_name": "Malloc0" 00:03:57.033 } 00:03:57.033 } 00:03:57.033 } 00:03:57.033 ]' 00:03:57.033 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:57.033 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.033 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.033 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.033 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.033 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.033 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:57.033 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.033 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.033 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.033 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.033 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.033 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.033 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.033 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.033 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:57.033 09:29:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:57.033 00:03:57.033 real 0m0.292s 00:03:57.033 user 0m0.192s 00:03:57.033 sys 0m0.034s 00:03:57.033 ************************************ 00:03:57.033 END TEST rpc_integrity 00:03:57.033 ************************************ 00:03:57.033 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.033 09:29:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.033 09:29:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:57.033 09:29:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:57.033 09:29:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.033 09:29:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.033 09:29:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.033 ************************************ 00:03:57.033 START TEST rpc_plugins 00:03:57.033 ************************************ 00:03:57.033 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:57.033 09:29:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:57.033 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.033 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.033 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.033 09:29:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:57.033 09:29:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:57.033 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.033 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.033 
09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.033 09:29:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:57.033 { 00:03:57.033 "name": "Malloc1", 00:03:57.033 "aliases": [ 00:03:57.033 "6adee912-9697-4d2c-9cb4-05efe40a66a1" 00:03:57.033 ], 00:03:57.033 "product_name": "Malloc disk", 00:03:57.033 "block_size": 4096, 00:03:57.033 "num_blocks": 256, 00:03:57.033 "uuid": "6adee912-9697-4d2c-9cb4-05efe40a66a1", 00:03:57.033 "assigned_rate_limits": { 00:03:57.033 "rw_ios_per_sec": 0, 00:03:57.033 "rw_mbytes_per_sec": 0, 00:03:57.033 "r_mbytes_per_sec": 0, 00:03:57.033 "w_mbytes_per_sec": 0 00:03:57.033 }, 00:03:57.033 "claimed": false, 00:03:57.033 "zoned": false, 00:03:57.033 "supported_io_types": { 00:03:57.033 "read": true, 00:03:57.033 "write": true, 00:03:57.033 "unmap": true, 00:03:57.033 "flush": true, 00:03:57.033 "reset": true, 00:03:57.033 "nvme_admin": false, 00:03:57.033 "nvme_io": false, 00:03:57.033 "nvme_io_md": false, 00:03:57.033 "write_zeroes": true, 00:03:57.033 "zcopy": true, 00:03:57.033 "get_zone_info": false, 00:03:57.033 "zone_management": false, 00:03:57.033 "zone_append": false, 00:03:57.033 "compare": false, 00:03:57.033 "compare_and_write": false, 00:03:57.033 "abort": true, 00:03:57.033 "seek_hole": false, 00:03:57.033 "seek_data": false, 00:03:57.033 "copy": true, 00:03:57.033 "nvme_iov_md": false 00:03:57.033 }, 00:03:57.033 "memory_domains": [ 00:03:57.033 { 00:03:57.033 "dma_device_id": "system", 00:03:57.033 "dma_device_type": 1 00:03:57.033 }, 00:03:57.033 { 00:03:57.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.033 "dma_device_type": 2 00:03:57.033 } 00:03:57.033 ], 00:03:57.033 "driver_specific": {} 00:03:57.033 } 00:03:57.033 ]' 00:03:57.033 09:29:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:57.292 09:29:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:57.292 09:29:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:57.292 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.292 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.292 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.292 09:29:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:57.292 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.292 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.292 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.292 09:29:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:57.292 09:29:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:57.292 09:29:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:57.292 00:03:57.292 real 0m0.158s 00:03:57.292 user 0m0.108s 00:03:57.292 sys 0m0.017s 00:03:57.292 ************************************ 00:03:57.292 END TEST rpc_plugins 00:03:57.292 ************************************ 00:03:57.292 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.292 09:29:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.292 09:29:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:57.292 09:29:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:57.292 09:29:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.292 09:29:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
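The rpc_trace_cmd_test that starts next only inspects trace_get_info output; the trace data itself lands in the shared-memory file named in the target's startup notice and can be decoded after the fact. A sketch using the command the target itself suggested, assuming spdk_trace was built alongside this tree:

    # Read the tracepoint snapshot for the spdk_tgt started for this suite (pid 58837).
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    "$SPDK_REPO/build/bin/spdk_trace" -s spdk_tgt -p 58837
    # /dev/shm/spdk_tgt_trace.pid58837 can also be copied for offline analysis,
    # as the startup log notes.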
00:03:57.292 09:29:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.292 ************************************ 00:03:57.292 START TEST rpc_trace_cmd_test 00:03:57.292 ************************************ 00:03:57.292 09:29:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:57.292 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:57.292 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:57.292 09:29:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.292 09:29:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:57.292 09:29:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.292 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:57.292 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58837", 00:03:57.292 "tpoint_group_mask": "0x8", 00:03:57.292 "iscsi_conn": { 00:03:57.292 "mask": "0x2", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 }, 00:03:57.292 "scsi": { 00:03:57.292 "mask": "0x4", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 }, 00:03:57.292 "bdev": { 00:03:57.292 "mask": "0x8", 00:03:57.292 "tpoint_mask": "0xffffffffffffffff" 00:03:57.292 }, 00:03:57.292 "nvmf_rdma": { 00:03:57.292 "mask": "0x10", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 }, 00:03:57.292 "nvmf_tcp": { 00:03:57.292 "mask": "0x20", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 }, 00:03:57.292 "ftl": { 00:03:57.292 "mask": "0x40", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 }, 00:03:57.292 "blobfs": { 00:03:57.292 "mask": "0x80", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 }, 00:03:57.292 "dsa": { 00:03:57.292 "mask": "0x200", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 }, 00:03:57.292 "thread": { 00:03:57.292 "mask": "0x400", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 }, 00:03:57.292 "nvme_pcie": { 00:03:57.292 "mask": "0x800", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 }, 00:03:57.292 "iaa": { 00:03:57.292 "mask": "0x1000", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 }, 00:03:57.292 "nvme_tcp": { 00:03:57.292 "mask": "0x2000", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 }, 00:03:57.292 "bdev_nvme": { 00:03:57.292 "mask": "0x4000", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 }, 00:03:57.292 "sock": { 00:03:57.292 "mask": "0x8000", 00:03:57.292 "tpoint_mask": "0x0" 00:03:57.292 } 00:03:57.292 }' 00:03:57.292 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:57.292 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:57.292 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:57.292 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:57.292 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:57.550 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:57.550 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:57.550 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:57.550 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:57.550 09:29:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:57.550 00:03:57.550 real 0m0.231s 00:03:57.550 user 0m0.205s 00:03:57.550 sys 0m0.019s 00:03:57.550 ************************************ 00:03:57.550 END TEST rpc_trace_cmd_test 00:03:57.550 
************************************ 00:03:57.550 09:29:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.550 09:29:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:57.550 09:29:51 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:57.550 09:29:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:57.550 09:29:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:57.550 09:29:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:57.550 09:29:51 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.550 09:29:51 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.550 09:29:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.550 ************************************ 00:03:57.550 START TEST rpc_daemon_integrity 00:03:57.550 ************************************ 00:03:57.550 09:29:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:57.550 09:29:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:57.550 09:29:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.550 09:29:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.551 09:29:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.551 09:29:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:57.551 09:29:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:57.551 09:29:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:57.551 09:29:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:57.551 09:29:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.551 09:29:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.551 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.551 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:57.551 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:57.551 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.551 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.834 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.834 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:57.834 { 00:03:57.834 "name": "Malloc2", 00:03:57.834 "aliases": [ 00:03:57.834 "a5d32c5c-afb7-44ab-b8dd-fca6bbf0a146" 00:03:57.834 ], 00:03:57.834 "product_name": "Malloc disk", 00:03:57.834 "block_size": 512, 00:03:57.834 "num_blocks": 16384, 00:03:57.834 "uuid": "a5d32c5c-afb7-44ab-b8dd-fca6bbf0a146", 00:03:57.834 "assigned_rate_limits": { 00:03:57.834 "rw_ios_per_sec": 0, 00:03:57.834 "rw_mbytes_per_sec": 0, 00:03:57.834 "r_mbytes_per_sec": 0, 00:03:57.834 "w_mbytes_per_sec": 0 00:03:57.834 }, 00:03:57.834 "claimed": false, 00:03:57.834 "zoned": false, 00:03:57.834 "supported_io_types": { 00:03:57.834 "read": true, 00:03:57.834 "write": true, 00:03:57.834 "unmap": true, 00:03:57.834 "flush": true, 00:03:57.834 "reset": true, 00:03:57.834 "nvme_admin": false, 00:03:57.834 "nvme_io": false, 00:03:57.834 "nvme_io_md": false, 00:03:57.834 "write_zeroes": true, 00:03:57.834 "zcopy": true, 00:03:57.834 "get_zone_info": false, 00:03:57.834 "zone_management": false, 00:03:57.834 
"zone_append": false, 00:03:57.834 "compare": false, 00:03:57.835 "compare_and_write": false, 00:03:57.835 "abort": true, 00:03:57.835 "seek_hole": false, 00:03:57.835 "seek_data": false, 00:03:57.835 "copy": true, 00:03:57.835 "nvme_iov_md": false 00:03:57.835 }, 00:03:57.835 "memory_domains": [ 00:03:57.835 { 00:03:57.835 "dma_device_id": "system", 00:03:57.835 "dma_device_type": 1 00:03:57.835 }, 00:03:57.835 { 00:03:57.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.835 "dma_device_type": 2 00:03:57.835 } 00:03:57.835 ], 00:03:57.835 "driver_specific": {} 00:03:57.835 } 00:03:57.835 ]' 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.835 [2024-07-15 09:29:52.083556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:57.835 [2024-07-15 09:29:52.083617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.835 [2024-07-15 09:29:52.083643] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9d4be0 00:03:57.835 [2024-07-15 09:29:52.083654] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.835 [2024-07-15 09:29:52.085346] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:57.835 [2024-07-15 09:29:52.085386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:57.835 Passthru0 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.835 { 00:03:57.835 "name": "Malloc2", 00:03:57.835 "aliases": [ 00:03:57.835 "a5d32c5c-afb7-44ab-b8dd-fca6bbf0a146" 00:03:57.835 ], 00:03:57.835 "product_name": "Malloc disk", 00:03:57.835 "block_size": 512, 00:03:57.835 "num_blocks": 16384, 00:03:57.835 "uuid": "a5d32c5c-afb7-44ab-b8dd-fca6bbf0a146", 00:03:57.835 "assigned_rate_limits": { 00:03:57.835 "rw_ios_per_sec": 0, 00:03:57.835 "rw_mbytes_per_sec": 0, 00:03:57.835 "r_mbytes_per_sec": 0, 00:03:57.835 "w_mbytes_per_sec": 0 00:03:57.835 }, 00:03:57.835 "claimed": true, 00:03:57.835 "claim_type": "exclusive_write", 00:03:57.835 "zoned": false, 00:03:57.835 "supported_io_types": { 00:03:57.835 "read": true, 00:03:57.835 "write": true, 00:03:57.835 "unmap": true, 00:03:57.835 "flush": true, 00:03:57.835 "reset": true, 00:03:57.835 "nvme_admin": false, 00:03:57.835 "nvme_io": false, 00:03:57.835 "nvme_io_md": false, 00:03:57.835 "write_zeroes": true, 00:03:57.835 "zcopy": true, 00:03:57.835 "get_zone_info": false, 00:03:57.835 "zone_management": false, 00:03:57.835 "zone_append": false, 00:03:57.835 "compare": false, 00:03:57.835 "compare_and_write": false, 00:03:57.835 "abort": true, 
00:03:57.835 "seek_hole": false, 00:03:57.835 "seek_data": false, 00:03:57.835 "copy": true, 00:03:57.835 "nvme_iov_md": false 00:03:57.835 }, 00:03:57.835 "memory_domains": [ 00:03:57.835 { 00:03:57.835 "dma_device_id": "system", 00:03:57.835 "dma_device_type": 1 00:03:57.835 }, 00:03:57.835 { 00:03:57.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.835 "dma_device_type": 2 00:03:57.835 } 00:03:57.835 ], 00:03:57.835 "driver_specific": {} 00:03:57.835 }, 00:03:57.835 { 00:03:57.835 "name": "Passthru0", 00:03:57.835 "aliases": [ 00:03:57.835 "39dd4aa3-530b-53d8-b440-28bcda5eb304" 00:03:57.835 ], 00:03:57.835 "product_name": "passthru", 00:03:57.835 "block_size": 512, 00:03:57.835 "num_blocks": 16384, 00:03:57.835 "uuid": "39dd4aa3-530b-53d8-b440-28bcda5eb304", 00:03:57.835 "assigned_rate_limits": { 00:03:57.835 "rw_ios_per_sec": 0, 00:03:57.835 "rw_mbytes_per_sec": 0, 00:03:57.835 "r_mbytes_per_sec": 0, 00:03:57.835 "w_mbytes_per_sec": 0 00:03:57.835 }, 00:03:57.835 "claimed": false, 00:03:57.835 "zoned": false, 00:03:57.835 "supported_io_types": { 00:03:57.835 "read": true, 00:03:57.835 "write": true, 00:03:57.835 "unmap": true, 00:03:57.835 "flush": true, 00:03:57.835 "reset": true, 00:03:57.835 "nvme_admin": false, 00:03:57.835 "nvme_io": false, 00:03:57.835 "nvme_io_md": false, 00:03:57.835 "write_zeroes": true, 00:03:57.835 "zcopy": true, 00:03:57.835 "get_zone_info": false, 00:03:57.835 "zone_management": false, 00:03:57.835 "zone_append": false, 00:03:57.835 "compare": false, 00:03:57.835 "compare_and_write": false, 00:03:57.835 "abort": true, 00:03:57.835 "seek_hole": false, 00:03:57.835 "seek_data": false, 00:03:57.835 "copy": true, 00:03:57.835 "nvme_iov_md": false 00:03:57.835 }, 00:03:57.835 "memory_domains": [ 00:03:57.835 { 00:03:57.835 "dma_device_id": "system", 00:03:57.835 "dma_device_type": 1 00:03:57.835 }, 00:03:57.835 { 00:03:57.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.835 "dma_device_type": 2 00:03:57.835 } 00:03:57.835 ], 00:03:57.835 "driver_specific": { 00:03:57.835 "passthru": { 00:03:57.835 "name": "Passthru0", 00:03:57.835 "base_bdev_name": "Malloc2" 00:03:57.835 } 00:03:57.835 } 00:03:57.835 } 00:03:57.835 ]' 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.835 09:29:52 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:57.835 00:03:57.835 real 0m0.325s 00:03:57.835 user 0m0.230s 00:03:57.835 sys 0m0.034s 00:03:57.835 ************************************ 00:03:57.835 END TEST rpc_daemon_integrity 00:03:57.835 ************************************ 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.835 09:29:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.835 09:29:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:57.835 09:29:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:57.835 09:29:52 rpc -- rpc/rpc.sh@84 -- # killprocess 58837 00:03:57.835 09:29:52 rpc -- common/autotest_common.sh@948 -- # '[' -z 58837 ']' 00:03:57.835 09:29:52 rpc -- common/autotest_common.sh@952 -- # kill -0 58837 00:03:57.835 09:29:52 rpc -- common/autotest_common.sh@953 -- # uname 00:03:57.835 09:29:52 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:57.835 09:29:52 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58837 00:03:58.093 09:29:52 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:58.093 09:29:52 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:58.093 killing process with pid 58837 00:03:58.093 09:29:52 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58837' 00:03:58.093 09:29:52 rpc -- common/autotest_common.sh@967 -- # kill 58837 00:03:58.093 09:29:52 rpc -- common/autotest_common.sh@972 -- # wait 58837 00:03:58.351 00:03:58.351 real 0m2.753s 00:03:58.351 user 0m3.590s 00:03:58.351 sys 0m0.622s 00:03:58.351 09:29:52 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.351 09:29:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.351 ************************************ 00:03:58.351 END TEST rpc 00:03:58.351 ************************************ 00:03:58.351 09:29:52 -- common/autotest_common.sh@1142 -- # return 0 00:03:58.351 09:29:52 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:58.351 09:29:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.351 09:29:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.351 09:29:52 -- common/autotest_common.sh@10 -- # set +x 00:03:58.351 ************************************ 00:03:58.351 START TEST skip_rpc 00:03:58.351 ************************************ 00:03:58.351 09:29:52 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:58.609 * Looking for test storage... 
00:03:58.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:58.609 09:29:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:58.609 09:29:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:58.609 09:29:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:58.609 09:29:52 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.609 09:29:52 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.609 09:29:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.609 ************************************ 00:03:58.609 START TEST skip_rpc 00:03:58.609 ************************************ 00:03:58.609 09:29:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:58.609 09:29:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59035 00:03:58.609 09:29:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.609 09:29:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:58.609 09:29:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:58.609 [2024-07-15 09:29:52.911730] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:03:58.609 [2024-07-15 09:29:52.911815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59035 ] 00:03:58.609 [2024-07-15 09:29:53.043744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.868 [2024-07-15 09:29:53.160015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.868 [2024-07-15 09:29:53.212771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59035 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 59035 ']' 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 59035 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59035 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:04.199 killing process with pid 59035 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59035' 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 59035 00:04:04.199 09:29:57 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 59035 00:04:04.199 00:04:04.199 real 0m5.418s 00:04:04.199 user 0m5.061s 00:04:04.199 sys 0m0.255s 00:04:04.199 09:29:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.199 ************************************ 00:04:04.199 END TEST skip_rpc 00:04:04.199 ************************************ 00:04:04.199 09:29:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.199 09:29:58 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:04.199 09:29:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:04.199 09:29:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.199 09:29:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.199 09:29:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.199 ************************************ 00:04:04.199 START TEST skip_rpc_with_json 00:04:04.199 ************************************ 00:04:04.199 09:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:04.199 09:29:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:04.199 09:29:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59116 00:04:04.199 09:29:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.199 09:29:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59116 00:04:04.199 09:29:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.199 09:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59116 ']' 00:04:04.199 09:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.199 09:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:04.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.199 09:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
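skip_rpc_with_json now brings up a second target (pid 59116), snapshots its live configuration with save_config, and later replays that file into a fresh spdk_tgt; the full JSON it captures is dumped below. The equivalent flow by hand, using the same paths this test uses:

    # Hand-driven version of the save/replay cycle exercised below.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    CONF=$SPDK_REPO/test/rpc/config.json
    "$SPDK_REPO/scripts/rpc.py" save_config > "$CONF"
    "$SPDK_REPO/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json "$CONF"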
00:04:04.199 09:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:04.199 09:29:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.199 [2024-07-15 09:29:58.377643] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:04.199 [2024-07-15 09:29:58.377729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59116 ] 00:04:04.199 [2024-07-15 09:29:58.513612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.199 [2024-07-15 09:29:58.642300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.458 [2024-07-15 09:29:58.695648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.025 [2024-07-15 09:29:59.435400] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:05.025 request: 00:04:05.025 { 00:04:05.025 "trtype": "tcp", 00:04:05.025 "method": "nvmf_get_transports", 00:04:05.025 "req_id": 1 00:04:05.025 } 00:04:05.025 Got JSON-RPC error response 00:04:05.025 response: 00:04:05.025 { 00:04:05.025 "code": -19, 00:04:05.025 "message": "No such device" 00:04:05.025 } 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.025 [2024-07-15 09:29:59.447523] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.025 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.283 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.283 09:29:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:05.283 { 00:04:05.283 "subsystems": [ 00:04:05.283 { 00:04:05.283 "subsystem": "keyring", 00:04:05.283 "config": [] 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "subsystem": "iobuf", 00:04:05.283 "config": [ 00:04:05.283 { 00:04:05.283 "method": "iobuf_set_options", 00:04:05.283 "params": { 00:04:05.283 "small_pool_count": 8192, 00:04:05.283 "large_pool_count": 1024, 00:04:05.283 "small_bufsize": 8192, 00:04:05.283 "large_bufsize": 135168 00:04:05.283 } 00:04:05.283 } 00:04:05.283 
] 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "subsystem": "sock", 00:04:05.283 "config": [ 00:04:05.283 { 00:04:05.283 "method": "sock_set_default_impl", 00:04:05.283 "params": { 00:04:05.283 "impl_name": "uring" 00:04:05.283 } 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "method": "sock_impl_set_options", 00:04:05.283 "params": { 00:04:05.283 "impl_name": "ssl", 00:04:05.283 "recv_buf_size": 4096, 00:04:05.283 "send_buf_size": 4096, 00:04:05.283 "enable_recv_pipe": true, 00:04:05.283 "enable_quickack": false, 00:04:05.283 "enable_placement_id": 0, 00:04:05.283 "enable_zerocopy_send_server": true, 00:04:05.283 "enable_zerocopy_send_client": false, 00:04:05.283 "zerocopy_threshold": 0, 00:04:05.283 "tls_version": 0, 00:04:05.283 "enable_ktls": false 00:04:05.283 } 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "method": "sock_impl_set_options", 00:04:05.283 "params": { 00:04:05.283 "impl_name": "posix", 00:04:05.283 "recv_buf_size": 2097152, 00:04:05.283 "send_buf_size": 2097152, 00:04:05.283 "enable_recv_pipe": true, 00:04:05.283 "enable_quickack": false, 00:04:05.283 "enable_placement_id": 0, 00:04:05.283 "enable_zerocopy_send_server": true, 00:04:05.283 "enable_zerocopy_send_client": false, 00:04:05.283 "zerocopy_threshold": 0, 00:04:05.283 "tls_version": 0, 00:04:05.283 "enable_ktls": false 00:04:05.283 } 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "method": "sock_impl_set_options", 00:04:05.283 "params": { 00:04:05.283 "impl_name": "uring", 00:04:05.283 "recv_buf_size": 2097152, 00:04:05.283 "send_buf_size": 2097152, 00:04:05.283 "enable_recv_pipe": true, 00:04:05.283 "enable_quickack": false, 00:04:05.283 "enable_placement_id": 0, 00:04:05.283 "enable_zerocopy_send_server": false, 00:04:05.283 "enable_zerocopy_send_client": false, 00:04:05.283 "zerocopy_threshold": 0, 00:04:05.283 "tls_version": 0, 00:04:05.283 "enable_ktls": false 00:04:05.283 } 00:04:05.283 } 00:04:05.283 ] 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "subsystem": "vmd", 00:04:05.283 "config": [] 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "subsystem": "accel", 00:04:05.283 "config": [ 00:04:05.283 { 00:04:05.283 "method": "accel_set_options", 00:04:05.283 "params": { 00:04:05.283 "small_cache_size": 128, 00:04:05.283 "large_cache_size": 16, 00:04:05.283 "task_count": 2048, 00:04:05.283 "sequence_count": 2048, 00:04:05.283 "buf_count": 2048 00:04:05.283 } 00:04:05.283 } 00:04:05.283 ] 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "subsystem": "bdev", 00:04:05.283 "config": [ 00:04:05.283 { 00:04:05.283 "method": "bdev_set_options", 00:04:05.283 "params": { 00:04:05.283 "bdev_io_pool_size": 65535, 00:04:05.283 "bdev_io_cache_size": 256, 00:04:05.283 "bdev_auto_examine": true, 00:04:05.283 "iobuf_small_cache_size": 128, 00:04:05.283 "iobuf_large_cache_size": 16 00:04:05.283 } 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "method": "bdev_raid_set_options", 00:04:05.283 "params": { 00:04:05.283 "process_window_size_kb": 1024 00:04:05.283 } 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "method": "bdev_iscsi_set_options", 00:04:05.283 "params": { 00:04:05.283 "timeout_sec": 30 00:04:05.283 } 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "method": "bdev_nvme_set_options", 00:04:05.283 "params": { 00:04:05.283 "action_on_timeout": "none", 00:04:05.283 "timeout_us": 0, 00:04:05.283 "timeout_admin_us": 0, 00:04:05.283 "keep_alive_timeout_ms": 10000, 00:04:05.283 "arbitration_burst": 0, 00:04:05.283 "low_priority_weight": 0, 00:04:05.283 "medium_priority_weight": 0, 00:04:05.283 "high_priority_weight": 0, 00:04:05.283 
"nvme_adminq_poll_period_us": 10000, 00:04:05.283 "nvme_ioq_poll_period_us": 0, 00:04:05.283 "io_queue_requests": 0, 00:04:05.283 "delay_cmd_submit": true, 00:04:05.283 "transport_retry_count": 4, 00:04:05.283 "bdev_retry_count": 3, 00:04:05.283 "transport_ack_timeout": 0, 00:04:05.283 "ctrlr_loss_timeout_sec": 0, 00:04:05.283 "reconnect_delay_sec": 0, 00:04:05.283 "fast_io_fail_timeout_sec": 0, 00:04:05.283 "disable_auto_failback": false, 00:04:05.283 "generate_uuids": false, 00:04:05.283 "transport_tos": 0, 00:04:05.283 "nvme_error_stat": false, 00:04:05.283 "rdma_srq_size": 0, 00:04:05.283 "io_path_stat": false, 00:04:05.283 "allow_accel_sequence": false, 00:04:05.283 "rdma_max_cq_size": 0, 00:04:05.283 "rdma_cm_event_timeout_ms": 0, 00:04:05.283 "dhchap_digests": [ 00:04:05.283 "sha256", 00:04:05.283 "sha384", 00:04:05.283 "sha512" 00:04:05.283 ], 00:04:05.283 "dhchap_dhgroups": [ 00:04:05.283 "null", 00:04:05.283 "ffdhe2048", 00:04:05.283 "ffdhe3072", 00:04:05.283 "ffdhe4096", 00:04:05.283 "ffdhe6144", 00:04:05.283 "ffdhe8192" 00:04:05.283 ] 00:04:05.283 } 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "method": "bdev_nvme_set_hotplug", 00:04:05.283 "params": { 00:04:05.283 "period_us": 100000, 00:04:05.283 "enable": false 00:04:05.283 } 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "method": "bdev_wait_for_examine" 00:04:05.283 } 00:04:05.283 ] 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "subsystem": "scsi", 00:04:05.283 "config": null 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "subsystem": "scheduler", 00:04:05.283 "config": [ 00:04:05.283 { 00:04:05.283 "method": "framework_set_scheduler", 00:04:05.283 "params": { 00:04:05.283 "name": "static" 00:04:05.283 } 00:04:05.283 } 00:04:05.283 ] 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "subsystem": "vhost_scsi", 00:04:05.283 "config": [] 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "subsystem": "vhost_blk", 00:04:05.283 "config": [] 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "subsystem": "ublk", 00:04:05.283 "config": [] 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "subsystem": "nbd", 00:04:05.283 "config": [] 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "subsystem": "nvmf", 00:04:05.283 "config": [ 00:04:05.283 { 00:04:05.283 "method": "nvmf_set_config", 00:04:05.283 "params": { 00:04:05.283 "discovery_filter": "match_any", 00:04:05.283 "admin_cmd_passthru": { 00:04:05.283 "identify_ctrlr": false 00:04:05.283 } 00:04:05.283 } 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "method": "nvmf_set_max_subsystems", 00:04:05.283 "params": { 00:04:05.283 "max_subsystems": 1024 00:04:05.283 } 00:04:05.283 }, 00:04:05.283 { 00:04:05.283 "method": "nvmf_set_crdt", 00:04:05.283 "params": { 00:04:05.283 "crdt1": 0, 00:04:05.283 "crdt2": 0, 00:04:05.283 "crdt3": 0 00:04:05.283 } 00:04:05.283 }, 00:04:05.284 { 00:04:05.284 "method": "nvmf_create_transport", 00:04:05.284 "params": { 00:04:05.284 "trtype": "TCP", 00:04:05.284 "max_queue_depth": 128, 00:04:05.284 "max_io_qpairs_per_ctrlr": 127, 00:04:05.284 "in_capsule_data_size": 4096, 00:04:05.284 "max_io_size": 131072, 00:04:05.284 "io_unit_size": 131072, 00:04:05.284 "max_aq_depth": 128, 00:04:05.284 "num_shared_buffers": 511, 00:04:05.284 "buf_cache_size": 4294967295, 00:04:05.284 "dif_insert_or_strip": false, 00:04:05.284 "zcopy": false, 00:04:05.284 "c2h_success": true, 00:04:05.284 "sock_priority": 0, 00:04:05.284 "abort_timeout_sec": 1, 00:04:05.284 "ack_timeout": 0, 00:04:05.284 "data_wr_pool_size": 0 00:04:05.284 } 00:04:05.284 } 00:04:05.284 ] 00:04:05.284 }, 00:04:05.284 { 00:04:05.284 "subsystem": 
"iscsi", 00:04:05.284 "config": [ 00:04:05.284 { 00:04:05.284 "method": "iscsi_set_options", 00:04:05.284 "params": { 00:04:05.284 "node_base": "iqn.2016-06.io.spdk", 00:04:05.284 "max_sessions": 128, 00:04:05.284 "max_connections_per_session": 2, 00:04:05.284 "max_queue_depth": 64, 00:04:05.284 "default_time2wait": 2, 00:04:05.284 "default_time2retain": 20, 00:04:05.284 "first_burst_length": 8192, 00:04:05.284 "immediate_data": true, 00:04:05.284 "allow_duplicated_isid": false, 00:04:05.284 "error_recovery_level": 0, 00:04:05.284 "nop_timeout": 60, 00:04:05.284 "nop_in_interval": 30, 00:04:05.284 "disable_chap": false, 00:04:05.284 "require_chap": false, 00:04:05.284 "mutual_chap": false, 00:04:05.284 "chap_group": 0, 00:04:05.284 "max_large_datain_per_connection": 64, 00:04:05.284 "max_r2t_per_connection": 4, 00:04:05.284 "pdu_pool_size": 36864, 00:04:05.284 "immediate_data_pool_size": 16384, 00:04:05.284 "data_out_pool_size": 2048 00:04:05.284 } 00:04:05.284 } 00:04:05.284 ] 00:04:05.284 } 00:04:05.284 ] 00:04:05.284 } 00:04:05.284 09:29:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:05.284 09:29:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59116 00:04:05.284 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59116 ']' 00:04:05.284 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59116 00:04:05.284 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:05.284 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:05.284 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59116 00:04:05.284 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:05.284 killing process with pid 59116 00:04:05.284 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:05.284 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59116' 00:04:05.284 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59116 00:04:05.284 09:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59116 00:04:05.849 09:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59149 00:04:05.849 09:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:05.849 09:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59149 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59149 ']' 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59149 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59149 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:11.115 killing process with pid 59149 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59149' 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59149 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59149 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:11.115 00:04:11.115 real 0m7.137s 00:04:11.115 user 0m6.942s 00:04:11.115 sys 0m0.630s 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.115 ************************************ 00:04:11.115 END TEST skip_rpc_with_json 00:04:11.115 ************************************ 00:04:11.115 09:30:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.115 09:30:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:11.115 09:30:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.115 09:30:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.115 09:30:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.115 ************************************ 00:04:11.115 START TEST skip_rpc_with_delay 00:04:11.115 ************************************ 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:11.115 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.373 
[2024-07-15 09:30:05.583575] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:11.373 [2024-07-15 09:30:05.583768] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:11.373 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:11.373 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:11.373 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:11.373 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:11.373 00:04:11.373 real 0m0.095s 00:04:11.373 user 0m0.055s 00:04:11.373 sys 0m0.037s 00:04:11.373 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.373 09:30:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:11.373 ************************************ 00:04:11.373 END TEST skip_rpc_with_delay 00:04:11.373 ************************************ 00:04:11.373 09:30:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.373 09:30:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:11.373 09:30:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:11.373 09:30:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:11.373 09:30:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.373 09:30:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.373 09:30:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.373 ************************************ 00:04:11.373 START TEST exit_on_failed_rpc_init 00:04:11.373 ************************************ 00:04:11.373 09:30:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:11.373 09:30:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59253 00:04:11.373 09:30:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59253 00:04:11.373 09:30:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:11.373 09:30:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59253 ']' 00:04:11.373 09:30:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.373 09:30:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.373 09:30:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.374 09:30:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.374 09:30:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:11.374 [2024-07-15 09:30:05.734674] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:04:11.374 [2024-07-15 09:30:05.734781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59253 ] 00:04:11.633 [2024-07-15 09:30:05.871547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.633 [2024-07-15 09:30:06.003758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.633 [2024-07-15 09:30:06.062214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:12.572 09:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:12.572 [2024-07-15 09:30:06.814695] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:12.572 [2024-07-15 09:30:06.814812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ] 00:04:12.572 [2024-07-15 09:30:06.959716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.830 [2024-07-15 09:30:07.087891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.830 [2024-07-15 09:30:07.088000] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:12.830 [2024-07-15 09:30:07.088018] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:12.830 [2024-07-15 09:30:07.088029] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59253 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59253 ']' 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59253 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59253 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:12.830 killing process with pid 59253 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:12.830 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59253' 00:04:12.831 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59253 00:04:12.831 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59253 00:04:13.397 00:04:13.397 real 0m1.964s 00:04:13.397 user 0m2.341s 00:04:13.397 sys 0m0.446s 00:04:13.397 ************************************ 00:04:13.397 END TEST exit_on_failed_rpc_init 00:04:13.397 ************************************ 00:04:13.397 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.397 09:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.397 09:30:07 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:13.397 09:30:07 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:13.397 00:04:13.397 real 0m14.915s 00:04:13.397 user 0m14.495s 00:04:13.397 sys 0m1.563s 00:04:13.397 09:30:07 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.397 09:30:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.397 ************************************ 00:04:13.397 END TEST skip_rpc 00:04:13.397 ************************************ 00:04:13.397 09:30:07 -- common/autotest_common.sh@1142 -- # return 0 00:04:13.397 09:30:07 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:13.397 09:30:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.397 
09:30:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.397 09:30:07 -- common/autotest_common.sh@10 -- # set +x 00:04:13.397 ************************************ 00:04:13.397 START TEST rpc_client 00:04:13.397 ************************************ 00:04:13.397 09:30:07 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:13.397 * Looking for test storage... 00:04:13.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:13.397 09:30:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:13.397 OK 00:04:13.397 09:30:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:13.397 00:04:13.397 real 0m0.104s 00:04:13.397 user 0m0.041s 00:04:13.397 sys 0m0.071s 00:04:13.397 09:30:07 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.397 ************************************ 00:04:13.397 END TEST rpc_client 00:04:13.397 ************************************ 00:04:13.397 09:30:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:13.656 09:30:07 -- common/autotest_common.sh@1142 -- # return 0 00:04:13.656 09:30:07 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:13.656 09:30:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.656 09:30:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.656 09:30:07 -- common/autotest_common.sh@10 -- # set +x 00:04:13.656 ************************************ 00:04:13.656 START TEST json_config 00:04:13.656 ************************************ 00:04:13.656 09:30:07 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:13.656 09:30:07 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:13.656 09:30:07 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.656 09:30:07 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.656 09:30:07 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.656 09:30:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.656 09:30:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.656 09:30:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.656 09:30:07 json_config -- paths/export.sh@5 -- # export PATH 00:04:13.656 09:30:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@47 -- # : 0 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:13.656 09:30:07 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:13.656 INFO: JSON configuration test init 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:13.656 09:30:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.656 09:30:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:13.656 09:30:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.656 09:30:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.656 09:30:07 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:13.656 09:30:07 json_config -- json_config/common.sh@9 -- # local app=target 00:04:13.656 09:30:07 json_config -- json_config/common.sh@10 -- # shift 00:04:13.656 09:30:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:13.656 09:30:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:13.656 09:30:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:13.656 09:30:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.656 09:30:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.656 09:30:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59389 00:04:13.656 Waiting for target to run... 00:04:13.657 09:30:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:13.657 09:30:07 json_config -- json_config/common.sh@25 -- # waitforlisten 59389 /var/tmp/spdk_tgt.sock 00:04:13.657 09:30:07 json_config -- common/autotest_common.sh@829 -- # '[' -z 59389 ']' 00:04:13.657 09:30:07 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:13.657 09:30:07 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:13.657 09:30:07 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:13.657 09:30:07 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:13.657 09:30:07 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.657 09:30:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.657 [2024-07-15 09:30:08.040452] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:13.657 [2024-07-15 09:30:08.040548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59389 ] 00:04:14.224 [2024-07-15 09:30:08.476797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.224 [2024-07-15 09:30:08.578919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.788 09:30:09 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.788 09:30:09 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:14.788 00:04:14.788 09:30:09 json_config -- json_config/common.sh@26 -- # echo '' 00:04:14.788 09:30:09 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:14.788 09:30:09 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:14.788 09:30:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:14.788 09:30:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.788 09:30:09 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:14.788 09:30:09 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:14.788 09:30:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:14.788 09:30:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.788 09:30:09 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:14.788 09:30:09 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:14.788 09:30:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:15.089 [2024-07-15 09:30:09.379335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:15.346 09:30:09 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:15.346 09:30:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:15.346 09:30:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.346 09:30:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.346 09:30:09 
json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:15.346 09:30:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:15.346 09:30:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:15.346 09:30:09 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:15.346 09:30:09 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:15.346 09:30:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:15.604 09:30:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:15.604 09:30:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:15.604 09:30:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:15.604 09:30:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:15.604 09:30:09 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.604 09:30:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:15.863 MallocForNvmf0 00:04:15.863 09:30:10 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:15.863 09:30:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:16.121 MallocForNvmf1 00:04:16.121 09:30:10 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:16.121 09:30:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:16.379 [2024-07-15 09:30:10.651368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.379 09:30:10 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.379 09:30:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:16.637 09:30:10 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.637 09:30:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:16.895 09:30:11 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:16.895 09:30:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:17.153 09:30:11 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:17.153 09:30:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:17.412 [2024-07-15 09:30:11.719969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:17.412 09:30:11 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:17.412 09:30:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.412 09:30:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.412 09:30:11 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:17.412 09:30:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.412 09:30:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.412 09:30:11 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:17.412 09:30:11 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:17.412 09:30:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:17.670 MallocBdevForConfigChangeCheck 00:04:17.670 09:30:12 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:17.670 09:30:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.670 09:30:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.670 09:30:12 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:17.670 09:30:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.235 INFO: shutting down applications... 00:04:18.235 09:30:12 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
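For reference, the nvmf setup exercised in the trace above can be reproduced by hand with the same rpc.py calls that appear in it; the socket path, malloc sizes and subsystem NQN are copied from the log, and only the final redirect is a sketch of where json_config.sh keeps its saved configuration (the spdk_tgt_config.json path listed earlier in the trace):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json   # redirect is illustrative; the test writes /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json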
00:04:18.235 09:30:12 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:18.235 09:30:12 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:18.235 09:30:12 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:18.235 09:30:12 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:18.493 Calling clear_iscsi_subsystem 00:04:18.493 Calling clear_nvmf_subsystem 00:04:18.493 Calling clear_nbd_subsystem 00:04:18.493 Calling clear_ublk_subsystem 00:04:18.493 Calling clear_vhost_blk_subsystem 00:04:18.493 Calling clear_vhost_scsi_subsystem 00:04:18.493 Calling clear_bdev_subsystem 00:04:18.493 09:30:12 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:18.493 09:30:12 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:18.493 09:30:12 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:18.493 09:30:12 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.493 09:30:12 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:18.493 09:30:12 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:19.057 09:30:13 json_config -- json_config/json_config.sh@345 -- # break 00:04:19.057 09:30:13 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:19.057 09:30:13 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:19.057 09:30:13 json_config -- json_config/common.sh@31 -- # local app=target 00:04:19.057 09:30:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:19.057 09:30:13 json_config -- json_config/common.sh@35 -- # [[ -n 59389 ]] 00:04:19.057 09:30:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59389 00:04:19.057 09:30:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:19.057 09:30:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.057 09:30:13 json_config -- json_config/common.sh@41 -- # kill -0 59389 00:04:19.057 09:30:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:19.315 09:30:13 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:19.315 09:30:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.315 09:30:13 json_config -- json_config/common.sh@41 -- # kill -0 59389 00:04:19.598 09:30:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:19.598 09:30:13 json_config -- json_config/common.sh@43 -- # break 00:04:19.598 09:30:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:19.598 SPDK target shutdown done 00:04:19.598 09:30:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:19.598 INFO: relaunching applications... 00:04:19.598 09:30:13 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
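The teardown traced just above does not restart the target; it empties it in place. clear_config.py walks every subsystem (iscsi, nvmf, nbd, ublk, vhost_blk, vhost_scsi, bdev), and json_config.sh then verifies the result by filtering a fresh save_config dump. The three adjacent filter invocations in the trace are chained in the script roughly as follows (the piping itself is inferred from the trace, not shown verbatim in it):

    test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method delete_global_parameters \
        | test/json_config/config_filter.py -method check_empty   # inferred: fails if any subsystem config remains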
00:04:19.598 09:30:13 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:19.598 09:30:13 json_config -- json_config/common.sh@9 -- # local app=target 00:04:19.598 09:30:13 json_config -- json_config/common.sh@10 -- # shift 00:04:19.598 09:30:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.598 09:30:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.598 09:30:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.598 09:30:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.598 09:30:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.598 09:30:13 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:19.598 09:30:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59585 00:04:19.598 Waiting for target to run... 00:04:19.598 09:30:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:19.598 09:30:13 json_config -- json_config/common.sh@25 -- # waitforlisten 59585 /var/tmp/spdk_tgt.sock 00:04:19.598 09:30:13 json_config -- common/autotest_common.sh@829 -- # '[' -z 59585 ']' 00:04:19.598 09:30:13 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.598 09:30:13 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.598 09:30:13 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.598 09:30:13 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.598 09:30:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.598 [2024-07-15 09:30:13.844182] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:19.598 [2024-07-15 09:30:13.844747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59585 ] 00:04:19.861 [2024-07-15 09:30:14.249226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.120 [2024-07-15 09:30:14.341341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.120 [2024-07-15 09:30:14.467510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:20.378 [2024-07-15 09:30:14.678626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.378 [2024-07-15 09:30:14.710703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:20.637 09:30:14 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.637 09:30:14 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:20.637 00:04:20.637 09:30:14 json_config -- json_config/common.sh@26 -- # echo '' 00:04:20.637 09:30:14 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:20.637 INFO: Checking if target configuration is the same... 
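The "Checking if target configuration is the same..." step that follows is test/json_config/json_diff.sh: it takes the live configuration (tgt_rpc save_config, passed in as /dev/fd/62) and the previously saved spdk_tgt_config.json, normalizes both with config_filter.py -method sort into mktemp files, and compares them with diff -u, so identical configs finish with ret=0 and any drift finishes with ret=1. A rough standalone equivalent (temp-file names here are placeholders for the mktemp results seen in the trace):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/live.sorted
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.sorted
    diff -u /tmp/live.sorted /tmp/saved.sorted && echo 'INFO: JSON config files are the same'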
00:04:20.637 09:30:14 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:20.637 09:30:14 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:20.637 09:30:14 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:20.637 09:30:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.637 + '[' 2 -ne 2 ']' 00:04:20.637 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:20.637 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:20.637 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:20.637 +++ basename /dev/fd/62 00:04:20.637 ++ mktemp /tmp/62.XXX 00:04:20.637 + tmp_file_1=/tmp/62.hly 00:04:20.637 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:20.637 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:20.637 + tmp_file_2=/tmp/spdk_tgt_config.json.QEq 00:04:20.637 + ret=0 00:04:20.637 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:20.896 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:20.896 + diff -u /tmp/62.hly /tmp/spdk_tgt_config.json.QEq 00:04:20.896 INFO: JSON config files are the same 00:04:20.896 + echo 'INFO: JSON config files are the same' 00:04:20.896 + rm /tmp/62.hly /tmp/spdk_tgt_config.json.QEq 00:04:20.896 + exit 0 00:04:20.896 09:30:15 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:20.896 INFO: changing configuration and checking if this can be detected... 00:04:20.896 09:30:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:20.896 09:30:15 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:20.896 09:30:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:21.155 09:30:15 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:21.155 09:30:15 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.155 09:30:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.155 + '[' 2 -ne 2 ']' 00:04:21.155 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:21.155 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:21.155 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:21.155 +++ basename /dev/fd/62 00:04:21.155 ++ mktemp /tmp/62.XXX 00:04:21.155 + tmp_file_1=/tmp/62.U8s 00:04:21.155 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.155 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:21.155 + tmp_file_2=/tmp/spdk_tgt_config.json.Hzm 00:04:21.155 + ret=0 00:04:21.155 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.722 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:21.722 + diff -u /tmp/62.U8s /tmp/spdk_tgt_config.json.Hzm 00:04:21.722 + ret=1 00:04:21.722 + echo '=== Start of file: /tmp/62.U8s ===' 00:04:21.722 + cat /tmp/62.U8s 00:04:21.722 + echo '=== End of file: /tmp/62.U8s ===' 00:04:21.722 + echo '' 00:04:21.722 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Hzm ===' 00:04:21.722 + cat /tmp/spdk_tgt_config.json.Hzm 00:04:21.722 + echo '=== End of file: /tmp/spdk_tgt_config.json.Hzm ===' 00:04:21.722 + echo '' 00:04:21.722 + rm /tmp/62.U8s /tmp/spdk_tgt_config.json.Hzm 00:04:21.722 + exit 1 00:04:21.722 INFO: configuration change detected. 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@317 -- # [[ -n 59585 ]] 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.722 09:30:16 json_config -- json_config/json_config.sh@323 -- # killprocess 59585 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@948 -- # '[' -z 59585 ']' 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@952 -- # kill -0 59585 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@953 -- # uname 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59585 00:04:21.722 
killing process with pid 59585 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59585' 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@967 -- # kill 59585 00:04:21.722 09:30:16 json_config -- common/autotest_common.sh@972 -- # wait 59585 00:04:21.980 09:30:16 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:21.980 09:30:16 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:21.980 09:30:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:21.980 09:30:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.980 09:30:16 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:21.980 INFO: Success 00:04:21.980 09:30:16 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:21.980 ************************************ 00:04:21.980 END TEST json_config 00:04:21.980 ************************************ 00:04:21.980 00:04:21.980 real 0m8.531s 00:04:21.980 user 0m12.233s 00:04:21.980 sys 0m1.749s 00:04:21.980 09:30:16 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.980 09:30:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.240 09:30:16 -- common/autotest_common.sh@1142 -- # return 0 00:04:22.240 09:30:16 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:22.240 09:30:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.240 09:30:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.240 09:30:16 -- common/autotest_common.sh@10 -- # set +x 00:04:22.240 ************************************ 00:04:22.240 START TEST json_config_extra_key 00:04:22.240 ************************************ 00:04:22.240 09:30:16 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:22.240 09:30:16 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.240 09:30:16 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.240 09:30:16 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.240 09:30:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.240 09:30:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.240 09:30:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.240 09:30:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:22.240 09:30:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.240 09:30:16 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:22.240 09:30:16 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.240 INFO: launching applications... 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:22.240 09:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:22.240 09:30:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:22.241 09:30:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:22.241 09:30:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.241 09:30:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.241 09:30:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.241 09:30:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.241 09:30:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.241 09:30:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59731 00:04:22.241 09:30:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:22.241 Waiting for target to run... 
00:04:22.241 09:30:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59731 /var/tmp/spdk_tgt.sock 00:04:22.241 09:30:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:22.241 09:30:16 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59731 ']' 00:04:22.241 09:30:16 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.241 09:30:16 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:22.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.241 09:30:16 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.241 09:30:16 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:22.241 09:30:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:22.241 [2024-07-15 09:30:16.603296] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:22.241 [2024-07-15 09:30:16.603392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59731 ] 00:04:22.807 [2024-07-15 09:30:17.018383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.807 [2024-07-15 09:30:17.110009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.807 [2024-07-15 09:30:17.131142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:23.373 09:30:17 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.373 09:30:17 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:23.373 09:30:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:23.373 00:04:23.373 INFO: shutting down applications... 00:04:23.373 09:30:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
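The shutdown traced in the next lines is a SIGINT followed by a kill -0 poll until the process disappears. A condensed sketch of that pattern, where tgt_pid stands in for the PID captured at launch and the 30 x 0.5 s budget mirrors the trace:

  kill -SIGINT "$tgt_pid"             # ask the target to exit cleanly
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$tgt_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5                       # still running, re-check shortly
  done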
00:04:23.373 09:30:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:23.373 09:30:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:23.373 09:30:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:23.373 09:30:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59731 ]] 00:04:23.373 09:30:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59731 00:04:23.373 09:30:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:23.373 09:30:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.373 09:30:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59731 00:04:23.373 09:30:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.631 09:30:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.631 09:30:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.631 09:30:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59731 00:04:23.631 09:30:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:23.631 09:30:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:23.631 09:30:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:23.631 09:30:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:23.631 SPDK target shutdown done 00:04:23.631 Success 00:04:23.631 09:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:23.631 00:04:23.631 real 0m1.587s 00:04:23.631 user 0m1.443s 00:04:23.631 sys 0m0.437s 00:04:23.631 09:30:18 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.631 09:30:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:23.631 ************************************ 00:04:23.632 END TEST json_config_extra_key 00:04:23.632 ************************************ 00:04:23.632 09:30:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:23.632 09:30:18 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:23.632 09:30:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.632 09:30:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.632 09:30:18 -- common/autotest_common.sh@10 -- # set +x 00:04:23.890 ************************************ 00:04:23.890 START TEST alias_rpc 00:04:23.890 ************************************ 00:04:23.890 09:30:18 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:23.890 * Looking for test storage... 
00:04:23.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:23.890 09:30:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:23.890 09:30:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59790 00:04:23.890 09:30:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59790 00:04:23.890 09:30:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.890 09:30:18 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59790 ']' 00:04:23.890 09:30:18 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.890 09:30:18 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:23.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.890 09:30:18 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.890 09:30:18 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:23.890 09:30:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.890 [2024-07-15 09:30:18.260315] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:23.890 [2024-07-15 09:30:18.260418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59790 ] 00:04:24.149 [2024-07-15 09:30:18.401007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.149 [2024-07-15 09:30:18.530138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.149 [2024-07-15 09:30:18.589146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:25.085 09:30:19 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.085 09:30:19 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:25.085 09:30:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:25.085 09:30:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59790 00:04:25.085 09:30:19 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59790 ']' 00:04:25.085 09:30:19 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59790 00:04:25.085 09:30:19 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:25.085 09:30:19 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:25.085 09:30:19 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59790 00:04:25.343 09:30:19 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:25.343 09:30:19 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:25.343 killing process with pid 59790 00:04:25.343 09:30:19 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59790' 00:04:25.343 09:30:19 alias_rpc -- common/autotest_common.sh@967 -- # kill 59790 00:04:25.343 09:30:19 alias_rpc -- common/autotest_common.sh@972 -- # wait 59790 00:04:25.601 ************************************ 00:04:25.602 END TEST alias_rpc 00:04:25.602 ************************************ 00:04:25.602 00:04:25.602 real 0m1.839s 00:04:25.602 user 0m2.087s 00:04:25.602 sys 0m0.449s 00:04:25.602 09:30:19 alias_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.602 09:30:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.602 09:30:19 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.602 09:30:19 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:25.602 09:30:19 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:25.602 09:30:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.602 09:30:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.602 09:30:19 -- common/autotest_common.sh@10 -- # set +x 00:04:25.602 ************************************ 00:04:25.602 START TEST spdkcli_tcp 00:04:25.602 ************************************ 00:04:25.602 09:30:19 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:25.602 * Looking for test storage... 00:04:25.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:25.602 09:30:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:25.860 09:30:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:25.860 09:30:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:25.860 09:30:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:25.860 09:30:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:25.860 09:30:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:25.860 09:30:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:25.860 09:30:20 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:25.860 09:30:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:25.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.860 09:30:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59866 00:04:25.860 09:30:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59866 00:04:25.860 09:30:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:25.860 09:30:20 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59866 ']' 00:04:25.860 09:30:20 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.860 09:30:20 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.860 09:30:20 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.860 09:30:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.860 09:30:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:25.860 [2024-07-15 09:30:20.130778] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
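Once this target is up, the spdkcli_tcp trace below bridges TCP port 9998 to the UNIX-domain RPC socket with socat and lists the available RPC methods through that bridge. A minimal sketch of the bridge-and-query sequence; the commands mirror the trace, while the explicit kill of the socat helper is an assumed cleanup step:

  # Expose the target's UNIX-domain RPC socket on TCP port 9998.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # Talk to the RPC server over TCP: 100 connection retries, 2 s timeout per call.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"                   # tear the bridge down afterwards (assumption)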
00:04:25.860 [2024-07-15 09:30:20.130868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59866 ] 00:04:25.860 [2024-07-15 09:30:20.266471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.118 [2024-07-15 09:30:20.385268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.118 [2024-07-15 09:30:20.385278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.118 [2024-07-15 09:30:20.440455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:26.685 09:30:21 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.685 09:30:21 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:26.685 09:30:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59883 00:04:26.685 09:30:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:26.685 09:30:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:26.944 [ 00:04:26.944 "bdev_malloc_delete", 00:04:26.944 "bdev_malloc_create", 00:04:26.944 "bdev_null_resize", 00:04:26.944 "bdev_null_delete", 00:04:26.944 "bdev_null_create", 00:04:26.944 "bdev_nvme_cuse_unregister", 00:04:26.944 "bdev_nvme_cuse_register", 00:04:26.944 "bdev_opal_new_user", 00:04:26.944 "bdev_opal_set_lock_state", 00:04:26.944 "bdev_opal_delete", 00:04:26.944 "bdev_opal_get_info", 00:04:26.944 "bdev_opal_create", 00:04:26.944 "bdev_nvme_opal_revert", 00:04:26.944 "bdev_nvme_opal_init", 00:04:26.944 "bdev_nvme_send_cmd", 00:04:26.944 "bdev_nvme_get_path_iostat", 00:04:26.944 "bdev_nvme_get_mdns_discovery_info", 00:04:26.944 "bdev_nvme_stop_mdns_discovery", 00:04:26.944 "bdev_nvme_start_mdns_discovery", 00:04:26.944 "bdev_nvme_set_multipath_policy", 00:04:26.944 "bdev_nvme_set_preferred_path", 00:04:26.944 "bdev_nvme_get_io_paths", 00:04:26.944 "bdev_nvme_remove_error_injection", 00:04:26.944 "bdev_nvme_add_error_injection", 00:04:26.944 "bdev_nvme_get_discovery_info", 00:04:26.944 "bdev_nvme_stop_discovery", 00:04:26.944 "bdev_nvme_start_discovery", 00:04:26.944 "bdev_nvme_get_controller_health_info", 00:04:26.944 "bdev_nvme_disable_controller", 00:04:26.944 "bdev_nvme_enable_controller", 00:04:26.944 "bdev_nvme_reset_controller", 00:04:26.944 "bdev_nvme_get_transport_statistics", 00:04:26.944 "bdev_nvme_apply_firmware", 00:04:26.944 "bdev_nvme_detach_controller", 00:04:26.944 "bdev_nvme_get_controllers", 00:04:26.945 "bdev_nvme_attach_controller", 00:04:26.945 "bdev_nvme_set_hotplug", 00:04:26.945 "bdev_nvme_set_options", 00:04:26.945 "bdev_passthru_delete", 00:04:26.945 "bdev_passthru_create", 00:04:26.945 "bdev_lvol_set_parent_bdev", 00:04:26.945 "bdev_lvol_set_parent", 00:04:26.945 "bdev_lvol_check_shallow_copy", 00:04:26.945 "bdev_lvol_start_shallow_copy", 00:04:26.945 "bdev_lvol_grow_lvstore", 00:04:26.945 "bdev_lvol_get_lvols", 00:04:26.945 "bdev_lvol_get_lvstores", 00:04:26.945 "bdev_lvol_delete", 00:04:26.945 "bdev_lvol_set_read_only", 00:04:26.945 "bdev_lvol_resize", 00:04:26.945 "bdev_lvol_decouple_parent", 00:04:26.945 "bdev_lvol_inflate", 00:04:26.945 "bdev_lvol_rename", 00:04:26.945 "bdev_lvol_clone_bdev", 00:04:26.945 "bdev_lvol_clone", 00:04:26.945 "bdev_lvol_snapshot", 00:04:26.945 "bdev_lvol_create", 
00:04:26.945 "bdev_lvol_delete_lvstore", 00:04:26.945 "bdev_lvol_rename_lvstore", 00:04:26.945 "bdev_lvol_create_lvstore", 00:04:26.945 "bdev_raid_set_options", 00:04:26.945 "bdev_raid_remove_base_bdev", 00:04:26.945 "bdev_raid_add_base_bdev", 00:04:26.945 "bdev_raid_delete", 00:04:26.945 "bdev_raid_create", 00:04:26.945 "bdev_raid_get_bdevs", 00:04:26.945 "bdev_error_inject_error", 00:04:26.945 "bdev_error_delete", 00:04:26.945 "bdev_error_create", 00:04:26.945 "bdev_split_delete", 00:04:26.945 "bdev_split_create", 00:04:26.945 "bdev_delay_delete", 00:04:26.945 "bdev_delay_create", 00:04:26.945 "bdev_delay_update_latency", 00:04:26.945 "bdev_zone_block_delete", 00:04:26.945 "bdev_zone_block_create", 00:04:26.945 "blobfs_create", 00:04:26.945 "blobfs_detect", 00:04:26.945 "blobfs_set_cache_size", 00:04:26.945 "bdev_aio_delete", 00:04:26.945 "bdev_aio_rescan", 00:04:26.945 "bdev_aio_create", 00:04:26.945 "bdev_ftl_set_property", 00:04:26.945 "bdev_ftl_get_properties", 00:04:26.945 "bdev_ftl_get_stats", 00:04:26.945 "bdev_ftl_unmap", 00:04:26.945 "bdev_ftl_unload", 00:04:26.945 "bdev_ftl_delete", 00:04:26.945 "bdev_ftl_load", 00:04:26.945 "bdev_ftl_create", 00:04:26.945 "bdev_virtio_attach_controller", 00:04:26.945 "bdev_virtio_scsi_get_devices", 00:04:26.945 "bdev_virtio_detach_controller", 00:04:26.945 "bdev_virtio_blk_set_hotplug", 00:04:26.945 "bdev_iscsi_delete", 00:04:26.945 "bdev_iscsi_create", 00:04:26.945 "bdev_iscsi_set_options", 00:04:26.945 "bdev_uring_delete", 00:04:26.945 "bdev_uring_rescan", 00:04:26.945 "bdev_uring_create", 00:04:26.945 "accel_error_inject_error", 00:04:26.945 "ioat_scan_accel_module", 00:04:26.945 "dsa_scan_accel_module", 00:04:26.945 "iaa_scan_accel_module", 00:04:26.945 "keyring_file_remove_key", 00:04:26.945 "keyring_file_add_key", 00:04:26.945 "keyring_linux_set_options", 00:04:26.945 "iscsi_get_histogram", 00:04:26.945 "iscsi_enable_histogram", 00:04:26.945 "iscsi_set_options", 00:04:26.945 "iscsi_get_auth_groups", 00:04:26.945 "iscsi_auth_group_remove_secret", 00:04:26.945 "iscsi_auth_group_add_secret", 00:04:26.945 "iscsi_delete_auth_group", 00:04:26.945 "iscsi_create_auth_group", 00:04:26.945 "iscsi_set_discovery_auth", 00:04:26.945 "iscsi_get_options", 00:04:26.945 "iscsi_target_node_request_logout", 00:04:26.945 "iscsi_target_node_set_redirect", 00:04:26.945 "iscsi_target_node_set_auth", 00:04:26.945 "iscsi_target_node_add_lun", 00:04:26.945 "iscsi_get_stats", 00:04:26.945 "iscsi_get_connections", 00:04:26.945 "iscsi_portal_group_set_auth", 00:04:26.945 "iscsi_start_portal_group", 00:04:26.945 "iscsi_delete_portal_group", 00:04:26.945 "iscsi_create_portal_group", 00:04:26.945 "iscsi_get_portal_groups", 00:04:26.945 "iscsi_delete_target_node", 00:04:26.945 "iscsi_target_node_remove_pg_ig_maps", 00:04:26.945 "iscsi_target_node_add_pg_ig_maps", 00:04:26.945 "iscsi_create_target_node", 00:04:26.945 "iscsi_get_target_nodes", 00:04:26.945 "iscsi_delete_initiator_group", 00:04:26.945 "iscsi_initiator_group_remove_initiators", 00:04:26.945 "iscsi_initiator_group_add_initiators", 00:04:26.945 "iscsi_create_initiator_group", 00:04:26.945 "iscsi_get_initiator_groups", 00:04:26.945 "nvmf_set_crdt", 00:04:26.945 "nvmf_set_config", 00:04:26.945 "nvmf_set_max_subsystems", 00:04:26.945 "nvmf_stop_mdns_prr", 00:04:26.945 "nvmf_publish_mdns_prr", 00:04:26.945 "nvmf_subsystem_get_listeners", 00:04:26.945 "nvmf_subsystem_get_qpairs", 00:04:26.945 "nvmf_subsystem_get_controllers", 00:04:26.945 "nvmf_get_stats", 00:04:26.945 "nvmf_get_transports", 00:04:26.945 
"nvmf_create_transport", 00:04:26.945 "nvmf_get_targets", 00:04:26.945 "nvmf_delete_target", 00:04:26.945 "nvmf_create_target", 00:04:26.945 "nvmf_subsystem_allow_any_host", 00:04:26.945 "nvmf_subsystem_remove_host", 00:04:26.945 "nvmf_subsystem_add_host", 00:04:26.945 "nvmf_ns_remove_host", 00:04:26.945 "nvmf_ns_add_host", 00:04:26.945 "nvmf_subsystem_remove_ns", 00:04:26.945 "nvmf_subsystem_add_ns", 00:04:26.945 "nvmf_subsystem_listener_set_ana_state", 00:04:26.945 "nvmf_discovery_get_referrals", 00:04:26.945 "nvmf_discovery_remove_referral", 00:04:26.945 "nvmf_discovery_add_referral", 00:04:26.945 "nvmf_subsystem_remove_listener", 00:04:26.945 "nvmf_subsystem_add_listener", 00:04:26.945 "nvmf_delete_subsystem", 00:04:26.945 "nvmf_create_subsystem", 00:04:26.945 "nvmf_get_subsystems", 00:04:26.945 "env_dpdk_get_mem_stats", 00:04:26.945 "nbd_get_disks", 00:04:26.945 "nbd_stop_disk", 00:04:26.945 "nbd_start_disk", 00:04:26.945 "ublk_recover_disk", 00:04:26.945 "ublk_get_disks", 00:04:26.945 "ublk_stop_disk", 00:04:26.945 "ublk_start_disk", 00:04:26.945 "ublk_destroy_target", 00:04:26.945 "ublk_create_target", 00:04:26.945 "virtio_blk_create_transport", 00:04:26.945 "virtio_blk_get_transports", 00:04:26.945 "vhost_controller_set_coalescing", 00:04:26.945 "vhost_get_controllers", 00:04:26.945 "vhost_delete_controller", 00:04:26.945 "vhost_create_blk_controller", 00:04:26.945 "vhost_scsi_controller_remove_target", 00:04:26.945 "vhost_scsi_controller_add_target", 00:04:26.945 "vhost_start_scsi_controller", 00:04:26.945 "vhost_create_scsi_controller", 00:04:26.945 "thread_set_cpumask", 00:04:26.945 "framework_get_governor", 00:04:26.945 "framework_get_scheduler", 00:04:26.945 "framework_set_scheduler", 00:04:26.945 "framework_get_reactors", 00:04:26.945 "thread_get_io_channels", 00:04:26.945 "thread_get_pollers", 00:04:26.945 "thread_get_stats", 00:04:26.945 "framework_monitor_context_switch", 00:04:26.945 "spdk_kill_instance", 00:04:26.945 "log_enable_timestamps", 00:04:26.945 "log_get_flags", 00:04:26.945 "log_clear_flag", 00:04:26.945 "log_set_flag", 00:04:26.945 "log_get_level", 00:04:26.945 "log_set_level", 00:04:26.945 "log_get_print_level", 00:04:26.945 "log_set_print_level", 00:04:26.945 "framework_enable_cpumask_locks", 00:04:26.945 "framework_disable_cpumask_locks", 00:04:26.945 "framework_wait_init", 00:04:26.945 "framework_start_init", 00:04:26.945 "scsi_get_devices", 00:04:26.945 "bdev_get_histogram", 00:04:26.945 "bdev_enable_histogram", 00:04:26.945 "bdev_set_qos_limit", 00:04:26.945 "bdev_set_qd_sampling_period", 00:04:26.945 "bdev_get_bdevs", 00:04:26.945 "bdev_reset_iostat", 00:04:26.945 "bdev_get_iostat", 00:04:26.945 "bdev_examine", 00:04:26.945 "bdev_wait_for_examine", 00:04:26.945 "bdev_set_options", 00:04:26.945 "notify_get_notifications", 00:04:26.945 "notify_get_types", 00:04:26.945 "accel_get_stats", 00:04:26.945 "accel_set_options", 00:04:26.946 "accel_set_driver", 00:04:26.946 "accel_crypto_key_destroy", 00:04:26.946 "accel_crypto_keys_get", 00:04:26.946 "accel_crypto_key_create", 00:04:26.946 "accel_assign_opc", 00:04:26.946 "accel_get_module_info", 00:04:26.946 "accel_get_opc_assignments", 00:04:26.946 "vmd_rescan", 00:04:26.946 "vmd_remove_device", 00:04:26.946 "vmd_enable", 00:04:26.946 "sock_get_default_impl", 00:04:26.946 "sock_set_default_impl", 00:04:26.946 "sock_impl_set_options", 00:04:26.946 "sock_impl_get_options", 00:04:26.946 "iobuf_get_stats", 00:04:26.946 "iobuf_set_options", 00:04:26.946 "framework_get_pci_devices", 00:04:26.946 
"framework_get_config", 00:04:26.946 "framework_get_subsystems", 00:04:26.946 "trace_get_info", 00:04:26.946 "trace_get_tpoint_group_mask", 00:04:26.946 "trace_disable_tpoint_group", 00:04:26.946 "trace_enable_tpoint_group", 00:04:26.946 "trace_clear_tpoint_mask", 00:04:26.946 "trace_set_tpoint_mask", 00:04:26.946 "keyring_get_keys", 00:04:26.946 "spdk_get_version", 00:04:26.946 "rpc_get_methods" 00:04:26.946 ] 00:04:26.946 09:30:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:26.946 09:30:21 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:26.946 09:30:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.946 09:30:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:26.946 09:30:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59866 00:04:26.946 09:30:21 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59866 ']' 00:04:26.946 09:30:21 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59866 00:04:26.946 09:30:21 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:26.946 09:30:21 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.946 09:30:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59866 00:04:27.204 killing process with pid 59866 00:04:27.204 09:30:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:27.204 09:30:21 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:27.204 09:30:21 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59866' 00:04:27.204 09:30:21 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59866 00:04:27.204 09:30:21 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59866 00:04:27.467 ************************************ 00:04:27.467 END TEST spdkcli_tcp 00:04:27.467 ************************************ 00:04:27.467 00:04:27.467 real 0m1.828s 00:04:27.467 user 0m3.407s 00:04:27.467 sys 0m0.458s 00:04:27.467 09:30:21 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.467 09:30:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:27.467 09:30:21 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.467 09:30:21 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.467 09:30:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.467 09:30:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.467 09:30:21 -- common/autotest_common.sh@10 -- # set +x 00:04:27.467 ************************************ 00:04:27.467 START TEST dpdk_mem_utility 00:04:27.467 ************************************ 00:04:27.467 09:30:21 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.467 * Looking for test storage... 
00:04:27.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:27.738 09:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:27.738 09:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59957 00:04:27.738 09:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59957 00:04:27.738 09:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:27.738 09:30:21 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 59957 ']' 00:04:27.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.738 09:30:21 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.738 09:30:21 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.738 09:30:21 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.738 09:30:21 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.738 09:30:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:27.738 [2024-07-15 09:30:22.024443] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:27.738 [2024-07-15 09:30:22.024577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59957 ] 00:04:27.738 [2024-07-15 09:30:22.167845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.010 [2024-07-15 09:30:22.288467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.010 [2024-07-15 09:30:22.344113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:28.576 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.576 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:28.576 09:30:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:28.577 09:30:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:28.577 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.577 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:28.577 { 00:04:28.577 "filename": "/tmp/spdk_mem_dump.txt" 00:04:28.577 } 00:04:28.577 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.577 09:30:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:28.836 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:28.836 1 heaps totaling size 814.000000 MiB 00:04:28.836 size: 814.000000 MiB heap id: 0 00:04:28.836 end heaps---------- 00:04:28.836 8 mempools totaling size 598.116089 MiB 00:04:28.836 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:28.836 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:28.836 size: 84.521057 MiB name: bdev_io_59957 00:04:28.836 size: 51.011292 MiB name: evtpool_59957 00:04:28.836 size: 50.003479 
MiB name: msgpool_59957 00:04:28.836 size: 21.763794 MiB name: PDU_Pool 00:04:28.836 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:28.836 size: 0.026123 MiB name: Session_Pool 00:04:28.836 end mempools------- 00:04:28.836 6 memzones totaling size 4.142822 MiB 00:04:28.836 size: 1.000366 MiB name: RG_ring_0_59957 00:04:28.836 size: 1.000366 MiB name: RG_ring_1_59957 00:04:28.836 size: 1.000366 MiB name: RG_ring_4_59957 00:04:28.836 size: 1.000366 MiB name: RG_ring_5_59957 00:04:28.836 size: 0.125366 MiB name: RG_ring_2_59957 00:04:28.836 size: 0.015991 MiB name: RG_ring_3_59957 00:04:28.836 end memzones------- 00:04:28.836 09:30:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:28.836 heap id: 0 total size: 814.000000 MiB number of busy elements: 299 number of free elements: 15 00:04:28.836 list of free elements. size: 12.472107 MiB 00:04:28.836 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:28.836 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:28.836 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:28.836 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:28.836 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:28.836 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:28.836 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:28.836 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:28.836 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:28.836 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:04:28.836 element at address: 0x20000b200000 with size: 0.489624 MiB 00:04:28.836 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:28.836 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:28.836 element at address: 0x200027e00000 with size: 0.395935 MiB 00:04:28.836 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:28.836 list of standard malloc elements. 
size: 199.265320 MiB 00:04:28.836 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:28.836 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:28.836 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:28.836 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:28.836 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:28.836 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:28.836 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:28.836 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:28.836 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:28.836 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:04:28.836 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:28.836 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:28.836 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:28.836 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:28.836 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:28.836 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:28.836 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:28.836 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:28.836 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:28.836 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:28.836 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:28.836 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:28.836 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:28.836 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:28.836 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:28.837 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92080 
with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94540 with size: 0.000183 MiB 
00:04:28.837 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:28.837 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e65680 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:28.837 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:28.838 element at 
address: 0x200027e6d800 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6fcc0 
with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:28.838 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:28.838 list of memzone associated elements. size: 602.262573 MiB 00:04:28.838 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:28.838 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:28.838 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:28.838 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:28.838 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:28.838 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59957_0 00:04:28.838 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:28.838 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59957_0 00:04:28.838 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:28.838 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59957_0 00:04:28.838 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:28.838 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:28.838 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:28.838 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:28.838 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:28.838 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59957 00:04:28.838 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:28.838 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59957 00:04:28.838 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:28.838 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59957 00:04:28.838 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:28.838 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:28.838 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:28.838 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:28.838 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:28.838 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:28.838 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:28.838 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:28.838 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:28.838 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59957 00:04:28.838 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:28.838 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59957 00:04:28.838 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:28.838 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59957 00:04:28.838 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:28.838 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59957 00:04:28.838 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:28.838 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59957 00:04:28.838 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:28.838 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:28.838 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:28.838 associated memzone info: size: 0.500366 MiB name: 
RG_MP_SCSI_TASK_Pool 00:04:28.838 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:28.838 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:28.838 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:28.838 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59957 00:04:28.838 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:28.838 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:28.838 element at address: 0x200027e65740 with size: 0.023743 MiB 00:04:28.838 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:28.838 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:28.838 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59957 00:04:28.838 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:04:28.838 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:28.838 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:28.838 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59957 00:04:28.838 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:28.838 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59957 00:04:28.838 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:04:28.838 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:28.838 09:30:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:28.838 09:30:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59957 00:04:28.838 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 59957 ']' 00:04:28.838 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 59957 00:04:28.838 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:28.838 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:28.838 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59957 00:04:28.838 killing process with pid 59957 00:04:28.838 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:28.838 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:28.838 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59957' 00:04:28.838 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 59957 00:04:28.838 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 59957 00:04:29.096 00:04:29.096 real 0m1.689s 00:04:29.096 user 0m1.853s 00:04:29.096 sys 0m0.418s 00:04:29.096 ************************************ 00:04:29.096 END TEST dpdk_mem_utility 00:04:29.096 ************************************ 00:04:29.096 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.096 09:30:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:29.353 09:30:23 -- common/autotest_common.sh@1142 -- # return 0 00:04:29.353 09:30:23 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:29.353 09:30:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.353 09:30:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.353 09:30:23 -- common/autotest_common.sh@10 -- # set +x 00:04:29.353 ************************************ 00:04:29.353 START TEST event 00:04:29.353 
************************************ 00:04:29.353 09:30:23 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:29.353 * Looking for test storage... 00:04:29.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:29.353 09:30:23 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:29.353 09:30:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:29.353 09:30:23 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.353 09:30:23 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:29.353 09:30:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.353 09:30:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.353 ************************************ 00:04:29.353 START TEST event_perf 00:04:29.353 ************************************ 00:04:29.353 09:30:23 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.354 Running I/O for 1 seconds...[2024-07-15 09:30:23.704218] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:29.354 [2024-07-15 09:30:23.704297] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60033 ] 00:04:29.611 [2024-07-15 09:30:23.838655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.611 [2024-07-15 09:30:23.962875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.611 [2024-07-15 09:30:23.963027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.611 [2024-07-15 09:30:23.962954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.611 [2024-07-15 09:30:23.963038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.984 Running I/O for 1 seconds... 00:04:30.984 lcore 0: 180129 00:04:30.984 lcore 1: 180131 00:04:30.984 lcore 2: 180132 00:04:30.984 lcore 3: 180133 00:04:30.984 done. 00:04:30.984 ************************************ 00:04:30.984 END TEST event_perf 00:04:30.984 ************************************ 00:04:30.984 00:04:30.984 real 0m1.372s 00:04:30.984 user 0m4.185s 00:04:30.984 sys 0m0.060s 00:04:30.984 09:30:25 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.984 09:30:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:30.984 09:30:25 event -- common/autotest_common.sh@1142 -- # return 0 00:04:30.984 09:30:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:30.984 09:30:25 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:30.984 09:30:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.984 09:30:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.984 ************************************ 00:04:30.984 START TEST event_reactor 00:04:30.984 ************************************ 00:04:30.984 09:30:25 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:30.984 [2024-07-15 09:30:25.124402] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:04:30.984 [2024-07-15 09:30:25.124504] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60067 ] 00:04:30.984 [2024-07-15 09:30:25.259132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.984 [2024-07-15 09:30:25.381110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.380 test_start 00:04:32.380 oneshot 00:04:32.380 tick 100 00:04:32.380 tick 100 00:04:32.380 tick 250 00:04:32.380 tick 100 00:04:32.380 tick 100 00:04:32.380 tick 100 00:04:32.380 tick 250 00:04:32.380 tick 500 00:04:32.380 tick 100 00:04:32.380 tick 100 00:04:32.380 tick 250 00:04:32.380 tick 100 00:04:32.380 tick 100 00:04:32.380 test_end 00:04:32.380 00:04:32.380 real 0m1.369s 00:04:32.380 user 0m1.210s 00:04:32.380 sys 0m0.050s 00:04:32.380 09:30:26 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.380 ************************************ 00:04:32.380 END TEST event_reactor 00:04:32.380 ************************************ 00:04:32.380 09:30:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:32.380 09:30:26 event -- common/autotest_common.sh@1142 -- # return 0 00:04:32.380 09:30:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:32.380 09:30:26 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:32.380 09:30:26 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.380 09:30:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.380 ************************************ 00:04:32.380 START TEST event_reactor_perf 00:04:32.380 ************************************ 00:04:32.380 09:30:26 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:32.380 [2024-07-15 09:30:26.537747] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:04:32.380 [2024-07-15 09:30:26.537853] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60103 ] 00:04:32.380 [2024-07-15 09:30:26.671846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.380 [2024-07-15 09:30:26.789726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.756 test_start 00:04:33.756 test_end 00:04:33.756 Performance: 377468 events per second 00:04:33.756 00:04:33.756 real 0m1.351s 00:04:33.756 user 0m1.197s 00:04:33.756 sys 0m0.048s 00:04:33.756 09:30:27 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.756 ************************************ 00:04:33.756 END TEST event_reactor_perf 00:04:33.756 ************************************ 00:04:33.756 09:30:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:33.756 09:30:27 event -- common/autotest_common.sh@1142 -- # return 0 00:04:33.756 09:30:27 event -- event/event.sh@49 -- # uname -s 00:04:33.756 09:30:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:33.756 09:30:27 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:33.756 09:30:27 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.756 09:30:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.756 09:30:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.756 ************************************ 00:04:33.756 START TEST event_scheduler 00:04:33.756 ************************************ 00:04:33.756 09:30:27 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:33.756 * Looking for test storage... 00:04:33.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:33.756 09:30:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:33.756 09:30:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60164 00:04:33.756 09:30:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.756 09:30:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60164 00:04:33.756 09:30:28 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60164 ']' 00:04:33.756 09:30:28 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.756 09:30:28 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.756 09:30:28 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.756 09:30:28 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.756 09:30:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.756 09:30:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:33.756 [2024-07-15 09:30:28.054948] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:04:33.756 [2024-07-15 09:30:28.055044] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60164 ] 00:04:33.756 [2024-07-15 09:30:28.190577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:34.014 [2024-07-15 09:30:28.310045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.014 [2024-07-15 09:30:28.310185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.014 [2024-07-15 09:30:28.310331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:34.014 [2024-07-15 09:30:28.310344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.581 09:30:29 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.581 09:30:29 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:34.581 09:30:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:34.581 09:30:29 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.581 09:30:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.581 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.581 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.581 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.581 POWER: Cannot set governor of lcore 0 to performance 00:04:34.581 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.581 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.581 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:34.581 POWER: Cannot set governor of lcore 0 to userspace 00:04:34.581 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:34.581 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:34.581 POWER: Unable to set Power Management Environment for lcore 0 00:04:34.581 [2024-07-15 09:30:29.018172] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:34.581 [2024-07-15 09:30:29.018272] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:34.581 [2024-07-15 09:30:29.018315] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:34.581 [2024-07-15 09:30:29.018402] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:34.581 [2024-07-15 09:30:29.018491] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:34.581 [2024-07-15 09:30:29.018586] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:34.581 09:30:29 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.581 09:30:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:34.581 09:30:29 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.581 09:30:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.839 [2024-07-15 09:30:29.077401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:34.839 [2024-07-15 09:30:29.111873] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:34.839 09:30:29 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.839 09:30:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:34.839 09:30:29 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.839 09:30:29 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.839 09:30:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.839 ************************************ 00:04:34.839 START TEST scheduler_create_thread 00:04:34.839 ************************************ 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.839 2 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.839 3 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.839 4 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.839 5 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.839 6 00:04:34.839 
09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.839 7 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.839 8 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.839 9 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.839 09:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.840 10 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:34.840 09:30:29 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:34.840 09:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:36.213 09:30:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:36.213 09:30:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:36.213 09:30:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:36.213 09:30:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:36.213 09:30:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.583 09:30:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.583 00:04:37.583 real 0m2.611s 00:04:37.583 user 0m0.014s 00:04:37.583 sys 0m0.005s 00:04:37.583 ************************************ 00:04:37.583 END TEST scheduler_create_thread 00:04:37.583 ************************************ 00:04:37.583 09:30:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.583 09:30:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.583 09:30:31 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:37.583 09:30:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:37.583 09:30:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60164 00:04:37.583 09:30:31 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60164 ']' 00:04:37.583 09:30:31 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60164 00:04:37.583 09:30:31 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:37.583 09:30:31 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.583 09:30:31 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60164 00:04:37.583 killing process with pid 60164 00:04:37.583 09:30:31 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:37.583 09:30:31 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:37.583 09:30:31 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60164' 00:04:37.583 09:30:31 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60164 00:04:37.583 09:30:31 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60164 00:04:37.841 [2024-07-15 09:30:32.212227] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:38.099 ************************************ 00:04:38.099 END TEST event_scheduler 00:04:38.099 ************************************ 00:04:38.099 00:04:38.099 real 0m4.526s 00:04:38.099 user 0m8.472s 00:04:38.099 sys 0m0.339s 00:04:38.099 09:30:32 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.099 09:30:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.099 09:30:32 event -- common/autotest_common.sh@1142 -- # return 0 00:04:38.099 09:30:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:38.099 09:30:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:38.099 09:30:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.099 09:30:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.099 09:30:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.099 ************************************ 00:04:38.099 START TEST app_repeat 00:04:38.099 ************************************ 00:04:38.099 09:30:32 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:38.099 Process app_repeat pid: 60258 00:04:38.099 spdk_app_start Round 0 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60258 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60258' 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:38.099 09:30:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60258 /var/tmp/spdk-nbd.sock 00:04:38.099 09:30:32 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60258 ']' 00:04:38.099 09:30:32 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.099 09:30:32 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:38.099 09:30:32 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.099 09:30:32 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.099 09:30:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.099 [2024-07-15 09:30:32.535257] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:04:38.099 [2024-07-15 09:30:32.535391] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60258 ] 00:04:38.371 [2024-07-15 09:30:32.677812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.371 [2024-07-15 09:30:32.795691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.371 [2024-07-15 09:30:32.795704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.630 [2024-07-15 09:30:32.849138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:39.196 09:30:33 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.196 09:30:33 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:39.196 09:30:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.454 Malloc0 00:04:39.454 09:30:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:39.712 Malloc1 00:04:39.712 09:30:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.712 09:30:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.712 09:30:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.712 09:30:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:39.712 09:30:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.712 09:30:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:39.712 09:30:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:39.712 09:30:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.712 09:30:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.712 09:30:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:39.712 09:30:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.712 09:30:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:39.713 09:30:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:39.713 09:30:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:39.713 09:30:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.713 09:30:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.970 /dev/nbd0 00:04:39.970 09:30:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.970 09:30:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:39.970 09:30:34 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.970 1+0 records in 00:04:39.970 1+0 records out 00:04:39.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592018 s, 6.9 MB/s 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:39.970 09:30:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:39.970 09:30:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.970 09:30:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.970 09:30:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.228 /dev/nbd1 00:04:40.228 09:30:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.228 09:30:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.228 09:30:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:40.228 09:30:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:40.228 09:30:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:40.228 09:30:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:40.228 09:30:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:40.228 09:30:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:40.228 09:30:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:40.228 09:30:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:40.228 09:30:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.228 1+0 records in 00:04:40.228 1+0 records out 00:04:40.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458057 s, 8.9 MB/s 00:04:40.228 09:30:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.229 09:30:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:40.229 09:30:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:40.229 09:30:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:40.229 09:30:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:40.229 09:30:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.229 09:30:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.229 09:30:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:04:40.229 09:30:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.229 09:30:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:40.487 { 00:04:40.487 "nbd_device": "/dev/nbd0", 00:04:40.487 "bdev_name": "Malloc0" 00:04:40.487 }, 00:04:40.487 { 00:04:40.487 "nbd_device": "/dev/nbd1", 00:04:40.487 "bdev_name": "Malloc1" 00:04:40.487 } 00:04:40.487 ]' 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.487 { 00:04:40.487 "nbd_device": "/dev/nbd0", 00:04:40.487 "bdev_name": "Malloc0" 00:04:40.487 }, 00:04:40.487 { 00:04:40.487 "nbd_device": "/dev/nbd1", 00:04:40.487 "bdev_name": "Malloc1" 00:04:40.487 } 00:04:40.487 ]' 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.487 /dev/nbd1' 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.487 /dev/nbd1' 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.487 256+0 records in 00:04:40.487 256+0 records out 00:04:40.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00739315 s, 142 MB/s 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.487 256+0 records in 00:04:40.487 256+0 records out 00:04:40.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025985 s, 40.4 MB/s 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.487 09:30:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:40.744 256+0 records in 00:04:40.744 256+0 records out 00:04:40.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301751 s, 34.7 MB/s 00:04:40.744 09:30:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:40.744 09:30:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.744 09:30:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.744 09:30:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:40.744 09:30:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:40.744 09:30:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:40.744 09:30:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:40.744 09:30:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.744 09:30:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:40.744 09:30:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.744 09:30:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:40.744 09:30:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:40.745 09:30:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:40.745 09:30:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.745 09:30:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.745 09:30:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:40.745 09:30:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:40.745 09:30:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.745 09:30:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.002 09:30:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.002 09:30:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.002 09:30:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.002 09:30:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.002 09:30:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.002 09:30:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.002 09:30:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.002 09:30:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.002 09:30:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.002 09:30:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.260 09:30:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.260 09:30:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.260 09:30:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.260 09:30:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.260 09:30:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.260 09:30:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.260 09:30:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.260 09:30:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.260 09:30:35 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.260 09:30:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.260 09:30:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.517 09:30:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.517 09:30:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.517 09:30:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.517 09:30:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.517 09:30:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.517 09:30:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.517 09:30:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.517 09:30:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.517 09:30:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.517 09:30:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.517 09:30:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.517 09:30:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.518 09:30:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.777 09:30:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.037 [2024-07-15 09:30:36.361717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.037 [2024-07-15 09:30:36.479038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.037 [2024-07-15 09:30:36.479050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.296 [2024-07-15 09:30:36.533281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:42.296 [2024-07-15 09:30:36.533369] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.296 [2024-07-15 09:30:36.533384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.839 spdk_app_start Round 1 00:04:44.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:44.839 09:30:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:44.839 09:30:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:44.839 09:30:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60258 /var/tmp/spdk-nbd.sock 00:04:44.839 09:30:39 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60258 ']' 00:04:44.839 09:30:39 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.839 09:30:39 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.839 09:30:39 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:44.839 09:30:39 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.839 09:30:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.097 09:30:39 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.097 09:30:39 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:45.097 09:30:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.355 Malloc0 00:04:45.355 09:30:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.612 Malloc1 00:04:45.612 09:30:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.612 09:30:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:45.869 /dev/nbd0 00:04:45.869 09:30:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.869 09:30:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.869 1+0 records in 00:04:45.869 1+0 records out 
00:04:45.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553571 s, 7.4 MB/s 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:45.869 09:30:40 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:45.869 09:30:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.869 09:30:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.869 09:30:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.127 /dev/nbd1 00:04:46.127 09:30:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.127 09:30:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.127 1+0 records in 00:04:46.127 1+0 records out 00:04:46.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429927 s, 9.5 MB/s 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:46.127 09:30:40 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:46.127 09:30:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.127 09:30:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.127 09:30:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.127 09:30:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.128 09:30:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.386 { 00:04:46.386 "nbd_device": "/dev/nbd0", 00:04:46.386 "bdev_name": "Malloc0" 00:04:46.386 }, 00:04:46.386 { 00:04:46.386 "nbd_device": "/dev/nbd1", 00:04:46.386 "bdev_name": "Malloc1" 00:04:46.386 } 
00:04:46.386 ]' 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.386 { 00:04:46.386 "nbd_device": "/dev/nbd0", 00:04:46.386 "bdev_name": "Malloc0" 00:04:46.386 }, 00:04:46.386 { 00:04:46.386 "nbd_device": "/dev/nbd1", 00:04:46.386 "bdev_name": "Malloc1" 00:04:46.386 } 00:04:46.386 ]' 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.386 /dev/nbd1' 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.386 /dev/nbd1' 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.386 256+0 records in 00:04:46.386 256+0 records out 00:04:46.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00837283 s, 125 MB/s 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.386 09:30:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.644 256+0 records in 00:04:46.644 256+0 records out 00:04:46.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264101 s, 39.7 MB/s 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.644 256+0 records in 00:04:46.644 256+0 records out 00:04:46.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266121 s, 39.4 MB/s 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.644 09:30:40 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.644 09:30:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.901 09:30:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.901 09:30:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.901 09:30:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.901 09:30:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.901 09:30:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.901 09:30:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.901 09:30:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.901 09:30:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.901 09:30:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.901 09:30:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.159 09:30:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.159 09:30:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.159 09:30:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.159 09:30:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.159 09:30:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.159 09:30:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.159 09:30:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.159 09:30:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.159 09:30:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.159 09:30:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.159 09:30:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.416 09:30:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.416 09:30:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.416 09:30:41 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:47.416 09:30:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:47.416 09:30:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:47.416 09:30:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.416 09:30:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:47.416 09:30:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:47.417 09:30:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:47.417 09:30:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:47.417 09:30:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:47.417 09:30:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:47.417 09:30:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:47.673 09:30:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:47.930 [2024-07-15 09:30:42.360107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.188 [2024-07-15 09:30:42.477176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.188 [2024-07-15 09:30:42.477187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.188 [2024-07-15 09:30:42.531151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:48.188 [2024-07-15 09:30:42.531247] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.188 [2024-07-15 09:30:42.531262] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.714 spdk_app_start Round 2 00:04:50.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:50.714 09:30:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.714 09:30:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:50.714 09:30:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60258 /var/tmp/spdk-nbd.sock 00:04:50.714 09:30:45 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60258 ']' 00:04:50.714 09:30:45 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.714 09:30:45 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.714 09:30:45 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
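The round that just finished is the data-path check at the heart of app_repeat: nbd_dd_data_verify writes a megabyte of random data through each exported NBD device and then reads it back for comparison before the exports are torn down. The traced commands reduce to roughly the following sketch (the loop structure and helper boundaries are assumptions; the paths and dd/cmp arguments are taken from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  nbd_list=('/dev/nbd0' '/dev/nbd1')
  tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  # write pass: 1 MiB of random data copied onto every export with O_DIRECT
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # verify pass: compare each device byte-for-byte against the source file
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"

  # teardown: stop each export and wait for the kernel to drop the node
  for dev in "${nbd_list[@]}"; do
      "$rpc" -s "$sock" nbd_stop_disk "$dev"
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$(basename "$dev")" /proc/partitions || break
          sleep 0.1
      done
  done

Only once the exports are gone does the harness recount the NBD devices (the jq / grep -c pipeline above, expecting 0) and send spdk_kill_instance SIGTERM, which is what ends each round before the three-second pause and the next restart.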
00:04:50.714 09:30:45 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.714 09:30:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.972 09:30:45 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.972 09:30:45 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:50.972 09:30:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.230 Malloc0 00:04:51.230 09:30:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.488 Malloc1 00:04:51.488 09:30:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.488 09:30:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.746 /dev/nbd0 00:04:52.048 09:30:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.048 09:30:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.048 09:30:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:52.048 09:30:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:52.048 09:30:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:52.048 09:30:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:52.048 09:30:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:52.048 09:30:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:52.048 09:30:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:52.048 09:30:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:52.048 09:30:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.048 1+0 records in 00:04:52.048 1+0 records out 
00:04:52.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232002 s, 17.7 MB/s 00:04:52.048 09:30:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.048 09:30:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:52.049 09:30:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.049 09:30:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.049 09:30:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.049 /dev/nbd1 00:04:52.049 09:30:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.049 09:30:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.049 1+0 records in 00:04:52.049 1+0 records out 00:04:52.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316233 s, 13.0 MB/s 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:52.049 09:30:46 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:52.049 09:30:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.049 09:30:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.049 09:30:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.049 09:30:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.049 09:30:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.615 { 00:04:52.615 "nbd_device": "/dev/nbd0", 00:04:52.615 "bdev_name": "Malloc0" 00:04:52.615 }, 00:04:52.615 { 00:04:52.615 "nbd_device": "/dev/nbd1", 00:04:52.615 "bdev_name": "Malloc1" 00:04:52.615 } 
00:04:52.615 ]' 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.615 { 00:04:52.615 "nbd_device": "/dev/nbd0", 00:04:52.615 "bdev_name": "Malloc0" 00:04:52.615 }, 00:04:52.615 { 00:04:52.615 "nbd_device": "/dev/nbd1", 00:04:52.615 "bdev_name": "Malloc1" 00:04:52.615 } 00:04:52.615 ]' 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.615 /dev/nbd1' 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.615 /dev/nbd1' 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.615 256+0 records in 00:04:52.615 256+0 records out 00:04:52.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00581902 s, 180 MB/s 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.615 256+0 records in 00:04:52.615 256+0 records out 00:04:52.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275301 s, 38.1 MB/s 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.615 256+0 records in 00:04:52.615 256+0 records out 00:04:52.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257205 s, 40.8 MB/s 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.615 09:30:46 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.615 09:30:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.875 09:30:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.875 09:30:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.875 09:30:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.875 09:30:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.875 09:30:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.875 09:30:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.875 09:30:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.875 09:30:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.875 09:30:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.875 09:30:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.134 09:30:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.134 09:30:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.134 09:30:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.134 09:30:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.134 09:30:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.134 09:30:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.134 09:30:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.134 09:30:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.134 09:30:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.134 09:30:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.134 09:30:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.392 09:30:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.392 09:30:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.392 09:30:47 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:53.392 09:30:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.392 09:30:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.392 09:30:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.392 09:30:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.392 09:30:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.392 09:30:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.392 09:30:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.392 09:30:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.392 09:30:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.392 09:30:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.651 09:30:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.910 [2024-07-15 09:30:48.282484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.169 [2024-07-15 09:30:48.398617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.169 [2024-07-15 09:30:48.398630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.169 [2024-07-15 09:30:48.451975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:54.169 [2024-07-15 09:30:48.452069] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.169 [2024-07-15 09:30:48.452084] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.697 09:30:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60258 /var/tmp/spdk-nbd.sock 00:04:56.697 09:30:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60258 ']' 00:04:56.697 09:30:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.697 09:30:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.697 09:30:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
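The setup half of each round is visible at the top of Round 2 above: two 64 MB malloc bdevs with 4 KiB blocks are created over the NBD RPC socket, each is exported as an NBD device, and every export is probed before any data touches it. A minimal sketch of that sequence, using the bdev and device names from the trace (the scratch path for the probe and the retry count of 20 are also read off the trace; the loop layout is an assumption):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  "$rpc" -s "$sock" bdev_malloc_create 64 4096      # prints "Malloc0"
  "$rpc" -s "$sock" bdev_malloc_create 64 4096      # prints "Malloc1"

  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

  # readiness probe per device: wait for the partition entry, then prove a
  # single-block direct read works before trusting the export
  scratch=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
  for name in nbd0 nbd1; do
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1
      done
      dd if=/dev/$name of="$scratch" bs=4096 count=1 iflag=direct
      [[ $(stat -c %s "$scratch") -ne 0 ]]
      rm -f "$scratch"
  done

  # sanity check: nbd_get_disks should now list exactly two /dev/nbd entries
  count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
  [[ $count -eq 2 ]]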
00:04:56.697 09:30:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.697 09:30:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.956 09:30:51 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.956 09:30:51 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:56.956 09:30:51 event.app_repeat -- event/event.sh@39 -- # killprocess 60258 00:04:56.956 09:30:51 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60258 ']' 00:04:56.956 09:30:51 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60258 00:04:56.956 09:30:51 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:56.956 09:30:51 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.956 09:30:51 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60258 00:04:56.956 killing process with pid 60258 00:04:56.956 09:30:51 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.956 09:30:51 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.956 09:30:51 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60258' 00:04:56.956 09:30:51 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60258 00:04:56.956 09:30:51 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60258 00:04:57.214 spdk_app_start is called in Round 0. 00:04:57.214 Shutdown signal received, stop current app iteration 00:04:57.214 Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 reinitialization... 00:04:57.214 spdk_app_start is called in Round 1. 00:04:57.214 Shutdown signal received, stop current app iteration 00:04:57.214 Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 reinitialization... 00:04:57.214 spdk_app_start is called in Round 2. 00:04:57.214 Shutdown signal received, stop current app iteration 00:04:57.214 Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 reinitialization... 00:04:57.214 spdk_app_start is called in Round 3. 
00:04:57.214 Shutdown signal received, stop current app iteration 00:04:57.214 ************************************ 00:04:57.214 END TEST app_repeat 00:04:57.214 ************************************ 00:04:57.214 09:30:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:57.214 09:30:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:57.214 00:04:57.214 real 0m19.069s 00:04:57.214 user 0m42.592s 00:04:57.214 sys 0m2.912s 00:04:57.214 09:30:51 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.214 09:30:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.214 09:30:51 event -- common/autotest_common.sh@1142 -- # return 0 00:04:57.214 09:30:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:57.214 09:30:51 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:57.214 09:30:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.214 09:30:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.214 09:30:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.214 ************************************ 00:04:57.214 START TEST cpu_locks 00:04:57.214 ************************************ 00:04:57.214 09:30:51 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:04:57.473 * Looking for test storage... 00:04:57.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:57.473 09:30:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:57.473 09:30:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:57.473 09:30:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:57.473 09:30:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:57.473 09:30:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.473 09:30:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.473 09:30:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.473 ************************************ 00:04:57.473 START TEST default_locks 00:04:57.473 ************************************ 00:04:57.473 09:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:57.473 09:30:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60691 00:04:57.473 09:30:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.473 09:30:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60691 00:04:57.473 09:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60691 ']' 00:04:57.473 09:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.473 09:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.473 09:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
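Every cpu_locks subtest that follows repeats the same process lifecycle: launch spdk_tgt pinned to core 0, wait until its RPC socket answers, run the lock checks, then kill and reap it. A condensed sketch of the waitforlisten / killprocess pair used for that (this simplifies the helpers in test/common/autotest_common.sh; the retry count, sleep interval and rpc_get_methods probe are assumptions inferred from the traced behaviour):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  pid=$!

  # waitforlisten: the process must stay alive and its RPC socket must respond
  for ((i = 1; i <= 40; i++)); do
      if ! kill -0 "$pid" 2> /dev/null; then
          echo "ERROR: process (pid: $pid) is no longer running"
          exit 1
      fi
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.5
  done

  # ... lock checks ...

  # killprocess: refuse to signal a sudo wrapper, then SIGTERM and reap
  [[ $(ps --no-headers -o comm= "$pid") != sudo ]]
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" || true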
00:04:57.473 09:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.473 09:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.473 [2024-07-15 09:30:51.768530] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:57.473 [2024-07-15 09:30:51.768618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60691 ] 00:04:57.473 [2024-07-15 09:30:51.901487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.732 [2024-07-15 09:30:52.019169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.732 [2024-07-15 09:30:52.072217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:58.298 09:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.298 09:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:58.298 09:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60691 00:04:58.298 09:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.298 09:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60691 00:04:58.864 09:30:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60691 00:04:58.864 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60691 ']' 00:04:58.864 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60691 00:04:58.864 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:58.864 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.864 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60691 00:04:58.864 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.864 killing process with pid 60691 00:04:58.864 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.864 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60691' 00:04:58.864 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60691 00:04:58.864 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60691 00:04:59.428 09:30:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60691 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60691 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.429 09:30:53 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60691 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60691 ']' 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.429 ERROR: process (pid: 60691) is no longer running 00:04:59.429 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60691) - No such process 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:59.429 00:04:59.429 real 0m1.945s 00:04:59.429 user 0m2.083s 00:04:59.429 sys 0m0.594s 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.429 ************************************ 00:04:59.429 END TEST default_locks 00:04:59.429 ************************************ 00:04:59.429 09:30:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.429 09:30:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:59.429 09:30:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:59.429 09:30:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.429 09:30:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.429 09:30:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.429 ************************************ 00:04:59.429 START TEST default_locks_via_rpc 00:04:59.429 ************************************ 00:04:59.429 09:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:59.429 09:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60743 00:04:59.429 09:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.429 09:30:53 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60743 00:04:59.429 09:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60743 ']' 00:04:59.429 09:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.429 09:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.429 09:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.429 09:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.429 09:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.429 [2024-07-15 09:30:53.767678] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:59.429 [2024-07-15 09:30:53.767780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60743 ] 00:04:59.687 [2024-07-15 09:30:53.898763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.687 [2024-07-15 09:30:54.017099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.687 [2024-07-15 09:30:54.070710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60743 00:05:00.254 09:30:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.254 09:30:54 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60743 00:05:00.821 09:30:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60743 00:05:00.821 09:30:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60743 ']' 00:05:00.821 09:30:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60743 00:05:00.821 09:30:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:00.821 09:30:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.821 09:30:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60743 00:05:00.821 09:30:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.821 killing process with pid 60743 00:05:00.821 09:30:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.821 09:30:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60743' 00:05:00.821 09:30:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60743 00:05:00.821 09:30:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60743 00:05:01.387 00:05:01.387 real 0m1.852s 00:05:01.387 user 0m1.946s 00:05:01.387 sys 0m0.571s 00:05:01.387 09:30:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.387 09:30:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.387 ************************************ 00:05:01.387 END TEST default_locks_via_rpc 00:05:01.387 ************************************ 00:05:01.387 09:30:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:01.387 09:30:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:01.387 09:30:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.387 09:30:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.387 09:30:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.387 ************************************ 00:05:01.387 START TEST non_locking_app_on_locked_coremask 00:05:01.387 ************************************ 00:05:01.387 09:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:01.387 09:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60794 00:05:01.387 09:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60794 /var/tmp/spdk.sock 00:05:01.387 09:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.387 09:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60794 ']' 00:05:01.387 09:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.387 09:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.387 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.387 09:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.387 09:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.387 09:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.387 [2024-07-15 09:30:55.666866] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:01.387 [2024-07-15 09:30:55.667005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60794 ] 00:05:01.387 [2024-07-15 09:30:55.807156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.679 [2024-07-15 09:30:55.926685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.679 [2024-07-15 09:30:55.981447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:02.312 09:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.312 09:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:02.313 09:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60810 00:05:02.313 09:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:02.313 09:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60810 /var/tmp/spdk2.sock 00:05:02.313 09:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60810 ']' 00:05:02.313 09:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.313 09:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.313 09:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.313 09:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.313 09:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.313 [2024-07-15 09:30:56.646042] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:02.313 [2024-07-15 09:30:56.646571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60810 ] 00:05:02.570 [2024-07-15 09:30:56.787245] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
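Two ways of relaxing the per-core lock appear in these traces: the --disable-cpumask-locks flag the second target above was started with, and the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs exercised in the default_locks_via_rpc test earlier. From the outside, the lock is observed with lslocks, exactly as the locks_exist checks do. In sketch form (the helper name and the spdk_cpu_lock pattern are taken from the trace; tgt_pid is a placeholder for the first target's pid, and the RPCs assume the default /var/tmp/spdk.sock socket):

  # does a running target hold the per-core lock files?
  locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }

  locks_exist "$tgt_pid"          # expected to succeed while core 0 is claimed

  # release and re-acquire the locks at runtime, without restarting the app
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks

  # or start a second instance that never takes the locks in the first place
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &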
00:05:02.570 [2024-07-15 09:30:56.787306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.570 [2024-07-15 09:30:57.028899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.829 [2024-07-15 09:30:57.138652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:03.396 09:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.396 09:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:03.396 09:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60794 00:05:03.396 09:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60794 00:05:03.396 09:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.963 09:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60794 00:05:03.963 09:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60794 ']' 00:05:03.963 09:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60794 00:05:03.963 09:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:03.963 09:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.963 09:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60794 00:05:03.963 09:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.963 09:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.963 09:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60794' 00:05:03.963 killing process with pid 60794 00:05:03.963 09:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60794 00:05:03.963 09:30:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60794 00:05:04.898 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60810 00:05:04.898 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60810 ']' 00:05:04.898 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60810 00:05:04.898 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:04.898 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.898 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60810 00:05:04.898 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.898 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.898 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 60810' 00:05:04.898 killing process with pid 60810 00:05:04.898 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60810 00:05:04.898 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60810 00:05:05.157 00:05:05.157 real 0m3.948s 00:05:05.157 user 0m4.325s 00:05:05.157 sys 0m1.064s 00:05:05.157 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.157 ************************************ 00:05:05.157 END TEST non_locking_app_on_locked_coremask 00:05:05.157 ************************************ 00:05:05.157 09:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.157 09:30:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:05.157 09:30:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:05.157 09:30:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.157 09:30:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.157 09:30:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.157 ************************************ 00:05:05.157 START TEST locking_app_on_unlocked_coremask 00:05:05.157 ************************************ 00:05:05.157 09:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:05.157 09:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60877 00:05:05.157 09:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:05.157 09:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60877 /var/tmp/spdk.sock 00:05:05.157 09:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60877 ']' 00:05:05.157 09:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.157 09:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.157 09:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.157 09:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.157 09:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.417 [2024-07-15 09:30:59.656647] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
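The locking_app_on_unlocked_coremask case beginning here inverts the previous scenario: the first target is launched with --disable-cpumask-locks, so core 0 stays unlocked, and a second target started normally on the same core is expected to take the lock for itself. In outline (pids and sockets as in the trace; locks_exist is the lslocks check sketched above):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # pid 60877, core 0 left unlocked
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # pid 60893, claims the core 0 lock

  locks_exist 60893    # the lock now belongs to the second instance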
00:05:05.417 [2024-07-15 09:30:59.656750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60877 ] 00:05:05.417 [2024-07-15 09:30:59.787422] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:05.417 [2024-07-15 09:30:59.787501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.675 [2024-07-15 09:30:59.905603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.675 [2024-07-15 09:30:59.958967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:06.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.242 09:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.242 09:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:06.242 09:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60893 00:05:06.242 09:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60893 /var/tmp/spdk2.sock 00:05:06.242 09:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60893 ']' 00:05:06.242 09:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:06.242 09:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.242 09:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.242 09:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.242 09:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.242 09:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.242 [2024-07-15 09:31:00.683572] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:06.242 [2024-07-15 09:31:00.683693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60893 ] 00:05:06.502 [2024-07-15 09:31:00.834350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.760 [2024-07-15 09:31:01.068962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.760 [2024-07-15 09:31:01.178302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.325 09:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.325 09:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:07.325 09:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60893 00:05:07.325 09:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60893 00:05:07.325 09:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.890 09:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60877 00:05:07.890 09:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60877 ']' 00:05:07.890 09:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60877 00:05:07.890 09:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:07.890 09:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.890 09:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60877 00:05:07.890 killing process with pid 60877 00:05:07.890 09:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.891 09:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.891 09:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60877' 00:05:07.891 09:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60877 00:05:07.891 09:31:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60877 00:05:08.823 09:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60893 00:05:08.823 09:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60893 ']' 00:05:08.823 09:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60893 00:05:08.823 09:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:08.823 09:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.823 09:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60893 00:05:08.823 killing process with pid 60893 00:05:08.823 09:31:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.823 09:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.823 09:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60893' 00:05:08.823 09:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60893 00:05:08.823 09:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60893 00:05:09.082 00:05:09.082 real 0m3.914s 00:05:09.082 user 0m4.367s 00:05:09.082 sys 0m1.045s 00:05:09.082 09:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.082 09:31:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.082 ************************************ 00:05:09.082 END TEST locking_app_on_unlocked_coremask 00:05:09.082 ************************************ 00:05:09.082 09:31:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:09.082 09:31:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:09.082 09:31:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.082 09:31:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.082 09:31:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.339 ************************************ 00:05:09.339 START TEST locking_app_on_locked_coremask 00:05:09.339 ************************************ 00:05:09.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.339 09:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:09.339 09:31:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60960 00:05:09.339 09:31:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60960 /var/tmp/spdk.sock 00:05:09.339 09:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60960 ']' 00:05:09.339 09:31:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.339 09:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.339 09:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.339 09:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.339 09:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.339 09:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.339 [2024-07-15 09:31:03.611146] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:09.339 [2024-07-15 09:31:03.611241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60960 ] 00:05:09.339 [2024-07-15 09:31:03.744122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.596 [2024-07-15 09:31:03.862338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.596 [2024-07-15 09:31:03.916587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60976 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60976 /var/tmp/spdk2.sock 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60976 /var/tmp/spdk2.sock 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:10.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60976 /var/tmp/spdk2.sock 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60976 ']' 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.161 09:31:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.418 [2024-07-15 09:31:04.653500] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
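The locking_app_on_locked_coremask run starting here launches a second spdk_tgt with the same -m 0x1 mask on a separate RPC socket. A rough reproduction of that collision outside the harness, assuming the repo build under /home/vagrant/spdk_repo/spdk and hugepages already configured; the sleep is a crude stand-in for the waitforlisten helper the suite uses:

  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &    # first target claims core 0
  sleep 2                                                # stand-in for waitforlisten
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock     # expected to fail: core 0 already locked
  echo "second target exited with $?"

As the trace below shows, the second process logs "Cannot create lock on core 0, probably process 60960 has claimed it" and exits, which is exactly what the NOT waitforlisten wrapper asserts.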
00:05:10.419 [2024-07-15 09:31:04.653606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60976 ] 00:05:10.419 [2024-07-15 09:31:04.799438] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60960 has claimed it. 00:05:10.419 [2024-07-15 09:31:04.799530] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.982 ERROR: process (pid: 60976) is no longer running 00:05:10.982 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60976) - No such process 00:05:10.982 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.982 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:10.982 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:10.982 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:10.982 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:10.982 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:10.982 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60960 00:05:10.982 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60960 00:05:10.982 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.544 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60960 00:05:11.544 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60960 ']' 00:05:11.544 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60960 00:05:11.544 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:11.544 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.544 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60960 00:05:11.544 killing process with pid 60960 00:05:11.544 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.544 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.544 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60960' 00:05:11.544 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60960 00:05:11.544 09:31:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60960 00:05:11.801 00:05:11.801 real 0m2.602s 00:05:11.801 user 0m3.032s 00:05:11.801 sys 0m0.592s 00:05:11.801 ************************************ 00:05:11.801 END TEST locking_app_on_locked_coremask 00:05:11.801 ************************************ 00:05:11.801 09:31:06 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.801 09:31:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.801 09:31:06 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:11.801 09:31:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:11.801 09:31:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.801 09:31:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.801 09:31:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.801 ************************************ 00:05:11.801 START TEST locking_overlapped_coremask 00:05:11.801 ************************************ 00:05:11.801 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:11.801 09:31:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61022 00:05:11.801 09:31:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:11.801 09:31:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61022 /var/tmp/spdk.sock 00:05:11.801 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61022 ']' 00:05:11.801 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.801 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.801 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.801 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.801 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.801 [2024-07-15 09:31:06.259450] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:11.801 [2024-07-15 09:31:06.259548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61022 ] 00:05:12.057 [2024-07-15 09:31:06.393893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.057 [2024-07-15 09:31:06.510391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.057 [2024-07-15 09:31:06.510519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.057 [2024-07-15 09:31:06.510524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.314 [2024-07-15 09:31:06.565663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61032 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61032 /var/tmp/spdk2.sock 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61032 /var/tmp/spdk2.sock 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61032 /var/tmp/spdk2.sock 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61032 ']' 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.314 09:31:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.572 [2024-07-15 09:31:06.830009] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
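The overlapped-coremask variant just launched differs only in the masks: 0x7 (cores 0-2) for the first target and 0x1c (cores 2-4) for the second, so the two processes contend for core 2 alone. A sketch of that pairing, assuming the same binaries and RPC sockets as above:

  ./build/bin/spdk_tgt -m 0x7 -r /var/tmp/spdk.sock &     # claims cores 0, 1 and 2
  sleep 2                                                 # stand-in for waitforlisten
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock     # wants cores 2-4; core 2 is taken
  echo "second target exited with $?"

The claim failure for core 2 logged a few lines below is the expected outcome here.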
00:05:12.572 [2024-07-15 09:31:06.830123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61032 ] 00:05:12.572 [2024-07-15 09:31:06.980613] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61022 has claimed it. 00:05:12.572 [2024-07-15 09:31:06.980682] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:13.136 ERROR: process (pid: 61032) is no longer running 00:05:13.136 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61032) - No such process 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61022 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 61022 ']' 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 61022 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61022 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61022' 00:05:13.137 killing process with pid 61022 00:05:13.137 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 61022 00:05:13.137 09:31:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 61022 00:05:13.703 00:05:13.703 real 0m1.737s 00:05:13.703 user 0m4.509s 00:05:13.703 sys 0m0.416s 00:05:13.703 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.703 09:31:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.703 ************************************ 00:05:13.703 END TEST locking_overlapped_coremask 00:05:13.703 ************************************ 00:05:13.703 09:31:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:13.703 09:31:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:13.703 09:31:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.703 09:31:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.703 09:31:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.703 ************************************ 00:05:13.703 START TEST locking_overlapped_coremask_via_rpc 00:05:13.703 ************************************ 00:05:13.703 09:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:13.703 09:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61072 00:05:13.703 09:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61072 /var/tmp/spdk.sock 00:05:13.703 09:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:13.703 09:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61072 ']' 00:05:13.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.703 09:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.703 09:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.703 09:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.703 09:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.703 09:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.703 [2024-07-15 09:31:08.051152] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:13.703 [2024-07-15 09:31:08.051265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61072 ] 00:05:13.962 [2024-07-15 09:31:08.187888] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
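check_remaining_locks, invoked at the end of the test that just finished, verifies that the surviving 0x7 target still owns exactly the lock files for cores 0-2. A small sketch of the same comparison, assuming the /var/tmp/spdk_cpu_lock_NNN naming seen in the trace:

  expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for a 0x7 mask
  actual=(/var/tmp/spdk_cpu_lock_*)              # whatever lock files actually exist
  if [[ "${actual[*]}" == "${expected[*]}" ]]; then
      echo "lock files match the claimed cores"
  else
      echo "unexpected lock files: ${actual[*]}" >&2
  fi

The locking_overlapped_coremask_via_rpc test starting here reuses the same check after enabling the locks over JSON-RPC instead of at startup.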
00:05:13.962 [2024-07-15 09:31:08.187947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.962 [2024-07-15 09:31:08.304150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.962 [2024-07-15 09:31:08.304229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.962 [2024-07-15 09:31:08.304232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.962 [2024-07-15 09:31:08.359506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:14.904 09:31:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.904 09:31:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:14.904 09:31:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61090 00:05:14.904 09:31:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:14.904 09:31:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61090 /var/tmp/spdk2.sock 00:05:14.904 09:31:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61090 ']' 00:05:14.904 09:31:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.904 09:31:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.904 09:31:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.904 09:31:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.904 09:31:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.904 [2024-07-15 09:31:09.068958] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:14.904 [2024-07-15 09:31:09.069370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61090 ] 00:05:14.904 [2024-07-15 09:31:09.212371] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:14.904 [2024-07-15 09:31:09.212410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.162 [2024-07-15 09:31:09.469953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.162 [2024-07-15 09:31:09.470077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.162 [2024-07-15 09:31:09.470078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:15.162 [2024-07-15 09:31:09.577971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:15.726 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.726 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:15.726 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.727 [2024-07-15 09:31:10.094016] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61072 has claimed it. 00:05:15.727 request: 00:05:15.727 { 00:05:15.727 "method": "framework_enable_cpumask_locks", 00:05:15.727 "req_id": 1 00:05:15.727 } 00:05:15.727 Got JSON-RPC error response 00:05:15.727 response: 00:05:15.727 { 00:05:15.727 "code": -32603, 00:05:15.727 "message": "Failed to claim CPU core: 2" 00:05:15.727 } 00:05:15.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
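Both targets in this via_rpc test start with --disable-cpumask-locks, and the JSON-RPC exchange above shows what happens when the second one is then asked to take locks on a core the first has already claimed. A hedged sketch of issuing the same calls by hand, assuming scripts/rpc.py from the SPDK repo as the client:

  ./scripts/rpc.py framework_enable_cpumask_locks                          # first target: succeeds
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target
  # the second call is expected to return the -32603 error shown above:
  # "Failed to claim CPU core: 2"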
00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61072 /var/tmp/spdk.sock 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61072 ']' 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.727 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.984 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.984 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:15.984 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61090 /var/tmp/spdk2.sock 00:05:15.984 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61090 ']' 00:05:15.984 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.984 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.984 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:15.984 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.984 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.241 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.241 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.241 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:16.241 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:16.241 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:16.241 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:16.241 00:05:16.241 real 0m2.683s 00:05:16.241 user 0m1.378s 00:05:16.241 sys 0m0.219s 00:05:16.241 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.241 09:31:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.241 ************************************ 00:05:16.241 END TEST locking_overlapped_coremask_via_rpc 00:05:16.241 ************************************ 00:05:16.499 09:31:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:16.499 09:31:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:16.499 09:31:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61072 ]] 00:05:16.499 09:31:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61072 00:05:16.499 09:31:10 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61072 ']' 00:05:16.499 09:31:10 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61072 00:05:16.499 09:31:10 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:16.499 09:31:10 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.499 09:31:10 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61072 00:05:16.499 killing process with pid 61072 00:05:16.499 09:31:10 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.499 09:31:10 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.499 09:31:10 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61072' 00:05:16.499 09:31:10 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61072 00:05:16.499 09:31:10 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61072 00:05:16.756 09:31:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61090 ]] 00:05:16.756 09:31:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61090 00:05:16.756 09:31:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61090 ']' 00:05:16.756 09:31:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61090 00:05:16.756 09:31:11 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:16.756 09:31:11 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.756 09:31:11 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61090 00:05:16.756 09:31:11 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:16.756 09:31:11 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:16.756 09:31:11 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61090' 00:05:16.756 killing process with pid 61090 00:05:16.756 09:31:11 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61090 00:05:16.756 09:31:11 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61090 00:05:17.321 09:31:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:17.321 09:31:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:17.321 09:31:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61072 ]] 00:05:17.321 Process with pid 61072 is not found 00:05:17.321 Process with pid 61090 is not found 00:05:17.321 09:31:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61072 00:05:17.321 09:31:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61072 ']' 00:05:17.321 09:31:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61072 00:05:17.321 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61072) - No such process 00:05:17.321 09:31:11 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61072 is not found' 00:05:17.321 09:31:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61090 ]] 00:05:17.321 09:31:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61090 00:05:17.321 09:31:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61090 ']' 00:05:17.321 09:31:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61090 00:05:17.321 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61090) - No such process 00:05:17.321 09:31:11 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61090 is not found' 00:05:17.321 09:31:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:17.321 ************************************ 00:05:17.321 END TEST cpu_locks 00:05:17.321 ************************************ 00:05:17.321 00:05:17.321 real 0m19.931s 00:05:17.321 user 0m34.152s 00:05:17.321 sys 0m5.321s 00:05:17.321 09:31:11 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.321 09:31:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.321 09:31:11 event -- common/autotest_common.sh@1142 -- # return 0 00:05:17.321 ************************************ 00:05:17.321 END TEST event 00:05:17.321 ************************************ 00:05:17.321 00:05:17.321 real 0m47.992s 00:05:17.321 user 1m31.941s 00:05:17.321 sys 0m8.953s 00:05:17.321 09:31:11 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.321 09:31:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.321 09:31:11 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.321 09:31:11 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:17.321 09:31:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.321 09:31:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.321 09:31:11 -- common/autotest_common.sh@10 -- # set +x 00:05:17.321 ************************************ 00:05:17.321 START TEST thread 
00:05:17.321 ************************************ 00:05:17.321 09:31:11 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:17.321 * Looking for test storage... 00:05:17.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:17.321 09:31:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:17.321 09:31:11 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:17.321 09:31:11 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.321 09:31:11 thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.321 ************************************ 00:05:17.321 START TEST thread_poller_perf 00:05:17.321 ************************************ 00:05:17.321 09:31:11 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:17.321 [2024-07-15 09:31:11.735211] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:17.321 [2024-07-15 09:31:11.735485] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61218 ] 00:05:17.579 [2024-07-15 09:31:11.871048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.579 [2024-07-15 09:31:11.986804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.579 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:18.949 ====================================== 00:05:18.949 busy:2210470620 (cyc) 00:05:18.949 total_run_count: 318000 00:05:18.949 tsc_hz: 2200000000 (cyc) 00:05:18.949 ====================================== 00:05:18.949 poller_cost: 6951 (cyc), 3159 (nsec) 00:05:18.949 00:05:18.949 real 0m1.366s 00:05:18.949 ************************************ 00:05:18.949 END TEST thread_poller_perf 00:05:18.949 ************************************ 00:05:18.949 user 0m1.205s 00:05:18.949 sys 0m0.052s 00:05:18.949 09:31:13 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.949 09:31:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.949 09:31:13 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:18.949 09:31:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:18.949 09:31:13 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:18.949 09:31:13 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.949 09:31:13 thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.949 ************************************ 00:05:18.949 START TEST thread_poller_perf 00:05:18.949 ************************************ 00:05:18.949 09:31:13 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:18.949 [2024-07-15 09:31:13.151210] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:18.949 [2024-07-15 09:31:13.151321] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61248 ] 00:05:18.949 [2024-07-15 09:31:13.289717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.206 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:19.206 [2024-07-15 09:31:13.418058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.141 ====================================== 00:05:20.141 busy:2202654759 (cyc) 00:05:20.141 total_run_count: 4017000 00:05:20.141 tsc_hz: 2200000000 (cyc) 00:05:20.141 ====================================== 00:05:20.141 poller_cost: 548 (cyc), 249 (nsec) 00:05:20.141 00:05:20.141 real 0m1.372s 00:05:20.141 user 0m1.204s 00:05:20.141 sys 0m0.059s 00:05:20.141 09:31:14 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.141 ************************************ 00:05:20.141 END TEST thread_poller_perf 00:05:20.141 09:31:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.141 ************************************ 00:05:20.141 09:31:14 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:20.141 09:31:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:20.141 ************************************ 00:05:20.141 END TEST thread 00:05:20.141 ************************************ 00:05:20.141 00:05:20.141 real 0m2.915s 00:05:20.141 user 0m2.477s 00:05:20.141 sys 0m0.215s 00:05:20.141 09:31:14 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.141 09:31:14 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.141 09:31:14 -- common/autotest_common.sh@1142 -- # return 0 00:05:20.141 09:31:14 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:20.141 09:31:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.141 09:31:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.141 09:31:14 -- common/autotest_common.sh@10 -- # set +x 00:05:20.141 ************************************ 00:05:20.141 START TEST accel 00:05:20.141 ************************************ 00:05:20.141 09:31:14 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:20.399 * Looking for test storage... 00:05:20.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:20.399 09:31:14 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:20.399 09:31:14 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:20.399 09:31:14 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
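The poller_perf summaries above derive poller_cost from the busy cycle count, the number of poller runs, and the TSC rate. A quick recomputation of the first run's figures, assuming the printed tsc_hz of 2200000000 cycles per second:

  busy=2210470620; runs=318000; tsc_hz=2200000000
  awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
      cyc = b / r                                  # cycles spent per poller invocation
      printf "%.0f cyc, %.0f nsec\n", cyc, cyc / (hz / 1e9)
  }'

This prints roughly 6951 cyc and 3160 nsec, in line with the 6951 (cyc), 3159 (nsec) reported for the 1-microsecond-period run (the tool presumably truncates rather than rounds). The 0-microsecond run lands at 548 cyc the same way: its busy cycles divided by its ~4 million runs.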
00:05:20.399 09:31:14 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61323 00:05:20.399 09:31:14 accel -- accel/accel.sh@63 -- # waitforlisten 61323 00:05:20.399 09:31:14 accel -- common/autotest_common.sh@829 -- # '[' -z 61323 ']' 00:05:20.399 09:31:14 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.399 09:31:14 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.399 09:31:14 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:20.399 09:31:14 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:20.399 09:31:14 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.399 09:31:14 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.399 09:31:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.399 09:31:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.399 09:31:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.399 09:31:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.399 09:31:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.399 09:31:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.399 09:31:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:20.399 09:31:14 accel -- accel/accel.sh@41 -- # jq -r . 00:05:20.399 [2024-07-15 09:31:14.765189] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:20.399 [2024-07-15 09:31:14.765474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61323 ] 00:05:20.657 [2024-07-15 09:31:14.908261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.657 [2024-07-15 09:31:15.024415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.657 [2024-07-15 09:31:15.079813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:21.589 09:31:15 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.589 09:31:15 accel -- common/autotest_common.sh@862 -- # return 0 00:05:21.589 09:31:15 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:21.589 09:31:15 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:21.589 09:31:15 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:21.589 09:31:15 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:21.589 09:31:15 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:21.589 09:31:15 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:21.589 09:31:15 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.589 09:31:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.589 09:31:15 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:21.589 09:31:15 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.589 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.589 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.589 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.589 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.589 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.589 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.589 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.589 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.589 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.589 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.589 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.589 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.589 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.589 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.589 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.589 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.589 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.589 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.589 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.589 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.589 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.589 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.589 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.589 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.590 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.590 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.590 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.590 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.590 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.590 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.590 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.590 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.590 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.590 09:31:15 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:21.590 09:31:15 accel -- accel/accel.sh@72 -- # IFS== 00:05:21.590 09:31:15 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:21.590 09:31:15 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:21.590 09:31:15 accel -- accel/accel.sh@75 -- # killprocess 61323 00:05:21.590 09:31:15 accel -- common/autotest_common.sh@948 -- # '[' -z 61323 ']' 00:05:21.590 09:31:15 accel -- common/autotest_common.sh@952 -- # kill -0 61323 00:05:21.590 09:31:15 accel -- common/autotest_common.sh@953 -- # uname 00:05:21.590 09:31:15 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.590 09:31:15 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61323 00:05:21.590 09:31:15 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.590 09:31:15 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.590 09:31:15 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61323' 00:05:21.590 killing process with pid 61323 00:05:21.590 09:31:15 accel -- common/autotest_common.sh@967 -- # kill 61323 00:05:21.590 09:31:15 accel -- common/autotest_common.sh@972 -- # wait 61323 00:05:21.848 09:31:16 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:21.848 09:31:16 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:21.848 09:31:16 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:21.848 09:31:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.848 09:31:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.848 09:31:16 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:21.848 09:31:16 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:21.848 09:31:16 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:21.848 09:31:16 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.848 09:31:16 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.848 09:31:16 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.848 09:31:16 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.848 09:31:16 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.848 09:31:16 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:21.848 09:31:16 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:21.848 09:31:16 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.848 09:31:16 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:21.848 09:31:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:21.848 09:31:16 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:21.848 09:31:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:21.848 09:31:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.848 09:31:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.106 ************************************ 00:05:22.106 START TEST accel_missing_filename 00:05:22.106 ************************************ 00:05:22.106 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:22.106 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:22.106 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:22.106 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:22.106 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.106 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:22.106 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.106 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:22.106 09:31:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:22.106 09:31:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:22.106 09:31:16 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.106 09:31:16 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.106 09:31:16 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.106 09:31:16 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.106 09:31:16 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.106 09:31:16 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:22.106 09:31:16 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:22.106 [2024-07-15 09:31:16.344822] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:22.106 [2024-07-15 09:31:16.344949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61374 ] 00:05:22.106 [2024-07-15 09:31:16.485394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.364 [2024-07-15 09:31:16.612511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.364 [2024-07-15 09:31:16.669681] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:22.364 [2024-07-15 09:31:16.746259] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:22.623 A filename is required. 
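The missing-filename failure above is deliberate: a compress workload has nowhere to read from unless -l names an uncompressed input file. A sketch of the corrected invocation, assuming the bib test file shipped in the repo at test/accel/bib:

  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib   # supply the input file

The compress_verify case that follows adds -y on top of this and is rejected with "Compression does not support the verify option, aborting.", which is the behaviour that test asserts.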
00:05:22.623 ************************************ 00:05:22.623 END TEST accel_missing_filename 00:05:22.623 ************************************ 00:05:22.623 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:22.623 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.623 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:22.623 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:22.623 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:22.623 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.623 00:05:22.623 real 0m0.516s 00:05:22.623 user 0m0.328s 00:05:22.623 sys 0m0.121s 00:05:22.623 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.623 09:31:16 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:22.623 09:31:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:22.623 09:31:16 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:22.623 09:31:16 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:22.623 09:31:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.623 09:31:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.623 ************************************ 00:05:22.623 START TEST accel_compress_verify 00:05:22.623 ************************************ 00:05:22.623 09:31:16 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:22.623 09:31:16 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:22.623 09:31:16 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:22.623 09:31:16 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:22.623 09:31:16 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.623 09:31:16 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:22.623 09:31:16 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.623 09:31:16 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:22.623 09:31:16 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:22.623 09:31:16 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:22.623 09:31:16 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.623 09:31:16 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.623 09:31:16 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.623 09:31:16 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.623 09:31:16 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.623 09:31:16 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:22.623 09:31:16 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:22.623 [2024-07-15 09:31:16.906649] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:22.623 [2024-07-15 09:31:16.906727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61404 ] 00:05:22.623 [2024-07-15 09:31:17.042400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.881 [2024-07-15 09:31:17.158777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.881 [2024-07-15 09:31:17.215039] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:22.881 [2024-07-15 09:31:17.291657] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:23.139 00:05:23.139 Compression does not support the verify option, aborting. 00:05:23.139 09:31:17 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:23.139 09:31:17 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.139 09:31:17 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:23.139 09:31:17 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:23.139 09:31:17 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:23.139 09:31:17 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.139 00:05:23.139 real 0m0.503s 00:05:23.139 user 0m0.338s 00:05:23.139 sys 0m0.107s 00:05:23.139 09:31:17 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.139 ************************************ 00:05:23.139 END TEST accel_compress_verify 00:05:23.139 ************************************ 00:05:23.139 09:31:17 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:23.139 09:31:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.139 09:31:17 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:23.139 09:31:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:23.139 09:31:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.139 09:31:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.139 ************************************ 00:05:23.139 START TEST accel_wrong_workload 00:05:23.139 ************************************ 00:05:23.139 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:23.139 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:23.139 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:23.139 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:23.139 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.139 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:23.139 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.139 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:23.139 09:31:17 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:23.139 09:31:17 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:23.139 09:31:17 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.139 09:31:17 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.139 09:31:17 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.139 09:31:17 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.139 09:31:17 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.139 09:31:17 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:23.139 09:31:17 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:23.139 Unsupported workload type: foobar 00:05:23.139 [2024-07-15 09:31:17.458112] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:23.139 accel_perf options: 00:05:23.139 [-h help message] 00:05:23.139 [-q queue depth per core] 00:05:23.139 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:23.139 [-T number of threads per core 00:05:23.139 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:23.139 [-t time in seconds] 00:05:23.139 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:23.139 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:23.139 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:23.139 [-l for compress/decompress workloads, name of uncompressed input file 00:05:23.140 [-S for crc32c workload, use this seed value (default 0) 00:05:23.140 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:23.140 [-f for fill workload, use this BYTE value (default 255) 00:05:23.140 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:23.140 [-y verify result if this switch is on] 00:05:23.140 [-a tasks to allocate per core (default: same value as -q)] 00:05:23.140 Can be used to spread operations across a wider range of memory. 
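(The "Unsupported workload type: foobar" error together with the option summary above documents what accel_perf accepts for -w. A hedged sketch of a valid invocation drawn from that list — crc32c with the seed and verify flags, exactly as the accel_crc32c test further down in this log runs it — would be:

  # sketch only, not captured output: -w must name one of the listed workload types
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
)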
00:05:23.140 ************************************ 00:05:23.140 END TEST accel_wrong_workload 00:05:23.140 ************************************ 00:05:23.140 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:23.140 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.140 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.140 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.140 00:05:23.140 real 0m0.034s 00:05:23.140 user 0m0.022s 00:05:23.140 sys 0m0.012s 00:05:23.140 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.140 09:31:17 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:23.140 09:31:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.140 09:31:17 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:23.140 09:31:17 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:23.140 09:31:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.140 09:31:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.140 ************************************ 00:05:23.140 START TEST accel_negative_buffers 00:05:23.140 ************************************ 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:23.140 09:31:17 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:23.140 09:31:17 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:23.140 09:31:17 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.140 09:31:17 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.140 09:31:17 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.140 09:31:17 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.140 09:31:17 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.140 09:31:17 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:23.140 09:31:17 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:23.140 -x option must be non-negative. 
00:05:23.140 [2024-07-15 09:31:17.533759] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:23.140 accel_perf options: 00:05:23.140 [-h help message] 00:05:23.140 [-q queue depth per core] 00:05:23.140 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:23.140 [-T number of threads per core 00:05:23.140 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:23.140 [-t time in seconds] 00:05:23.140 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:23.140 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:23.140 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:23.140 [-l for compress/decompress workloads, name of uncompressed input file 00:05:23.140 [-S for crc32c workload, use this seed value (default 0) 00:05:23.140 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:23.140 [-f for fill workload, use this BYTE value (default 255) 00:05:23.140 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:23.140 [-y verify result if this switch is on] 00:05:23.140 [-a tasks to allocate per core (default: same value as -q)] 00:05:23.140 Can be used to spread operations across a wider range of memory. 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.140 ************************************ 00:05:23.140 END TEST accel_negative_buffers 00:05:23.140 ************************************ 00:05:23.140 00:05:23.140 real 0m0.030s 00:05:23.140 user 0m0.015s 00:05:23.140 sys 0m0.013s 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.140 09:31:17 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:23.140 09:31:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.140 09:31:17 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:23.140 09:31:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:23.140 09:31:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.140 09:31:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.140 ************************************ 00:05:23.140 START TEST accel_crc32c 00:05:23.140 ************************************ 00:05:23.140 09:31:17 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:23.140 09:31:17 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:23.397 [2024-07-15 09:31:17.613121] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:23.397 [2024-07-15 09:31:17.613205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61463 ] 00:05:23.397 [2024-07-15 09:31:17.748164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.654 [2024-07-15 09:31:17.874016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.654 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:23.655 09:31:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:25.069 09:31:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.069 00:05:25.069 real 0m1.520s 00:05:25.069 user 0m1.304s 00:05:25.069 sys 0m0.119s 00:05:25.069 09:31:19 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.069 09:31:19 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:25.069 ************************************ 00:05:25.069 END TEST accel_crc32c 00:05:25.069 ************************************ 00:05:25.069 09:31:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.069 09:31:19 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:25.069 09:31:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:25.069 09:31:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.069 09:31:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.069 ************************************ 00:05:25.069 START TEST accel_crc32c_C2 00:05:25.069 ************************************ 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:25.069 09:31:19 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:25.069 [2024-07-15 09:31:19.183012] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:25.069 [2024-07-15 09:31:19.183099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61497 ] 00:05:25.069 [2024-07-15 09:31:19.322077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.069 [2024-07-15 09:31:19.435505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:25.069 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.070 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:25.070 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:25.070 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.070 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:25.070 09:31:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.443 09:31:20 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.443 ************************************ 00:05:26.443 END TEST accel_crc32c_C2 00:05:26.443 ************************************ 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.443 00:05:26.443 real 0m1.509s 00:05:26.443 user 0m1.302s 00:05:26.443 sys 0m0.110s 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.443 09:31:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:26.443 09:31:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:26.443 09:31:20 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:26.443 09:31:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:26.443 09:31:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.443 09:31:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.443 ************************************ 00:05:26.443 START TEST accel_copy 00:05:26.443 ************************************ 00:05:26.443 09:31:20 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.443 09:31:20 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:26.443 09:31:20 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:26.443 [2024-07-15 09:31:20.743631] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:26.443 [2024-07-15 09:31:20.743721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61532 ] 00:05:26.443 [2024-07-15 09:31:20.882895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.700 [2024-07-15 09:31:21.004599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 
09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:26.700 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:26.701 09:31:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:28.070 09:31:22 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.070 00:05:28.070 real 0m1.514s 00:05:28.070 user 0m1.296s 00:05:28.070 sys 0m0.123s 00:05:28.070 ************************************ 00:05:28.070 END TEST accel_copy 00:05:28.070 09:31:22 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.070 09:31:22 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:28.070 ************************************ 00:05:28.070 09:31:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.070 09:31:22 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:28.070 09:31:22 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:28.070 09:31:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.070 09:31:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.070 ************************************ 00:05:28.070 START TEST accel_fill 00:05:28.070 ************************************ 00:05:28.070 09:31:22 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.070 09:31:22 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:28.070 09:31:22 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:28.070 [2024-07-15 09:31:22.307770] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:28.070 [2024-07-15 09:31:22.307874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61566 ] 00:05:28.070 [2024-07-15 09:31:22.445712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.328 [2024-07-15 09:31:22.559905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.328 09:31:22 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:28.328 09:31:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:29.702 ************************************ 00:05:29.702 END TEST accel_fill 00:05:29.702 ************************************ 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:29.702 09:31:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.702 00:05:29.702 real 0m1.504s 00:05:29.702 user 0m1.297s 00:05:29.702 sys 0m0.114s 00:05:29.702 09:31:23 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.702 09:31:23 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:29.702 09:31:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.702 09:31:23 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:29.702 09:31:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:29.702 09:31:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.702 09:31:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.702 ************************************ 00:05:29.702 START TEST accel_copy_crc32c 00:05:29.702 ************************************ 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:29.702 09:31:23 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:29.702 [2024-07-15 09:31:23.862070] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:29.702 [2024-07-15 09:31:23.862149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61601 ] 00:05:29.702 [2024-07-15 09:31:24.000074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.702 [2024-07-15 09:31:24.120308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.960 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:29.961 09:31:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.911 ************************************ 00:05:30.911 END TEST accel_copy_crc32c 00:05:30.911 ************************************ 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.911 00:05:30.911 real 0m1.518s 00:05:30.911 user 0m1.300s 00:05:30.911 sys 0m0.126s 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.911 09:31:25 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:31.170 09:31:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.170 09:31:25 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:31.170 09:31:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:31.170 09:31:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.170 09:31:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.170 ************************************ 00:05:31.170 START TEST accel_copy_crc32c_C2 00:05:31.170 ************************************ 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:31.170 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:31.170 [2024-07-15 09:31:25.433033] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:31.170 [2024-07-15 09:31:25.433115] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61635 ] 00:05:31.170 [2024-07-15 09:31:25.566303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.428 [2024-07-15 09:31:25.688099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:31.428 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.429 09:31:25 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:31.429 09:31:25 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:32.829 ************************************ 00:05:32.829 END TEST accel_copy_crc32c_C2 00:05:32.829 ************************************ 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.829 00:05:32.829 real 0m1.518s 00:05:32.829 
user 0m1.312s 00:05:32.829 sys 0m0.112s 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.829 09:31:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:32.829 09:31:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.829 09:31:26 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:32.829 09:31:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:32.829 09:31:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.829 09:31:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.829 ************************************ 00:05:32.829 START TEST accel_dualcast 00:05:32.829 ************************************ 00:05:32.829 09:31:26 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:32.829 09:31:26 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:32.829 [2024-07-15 09:31:26.999876] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
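For context, the dualcast case being brought up here drives the same accel_perf example binary that the trace shows; dualcast writes one source buffer out to two destinations. A minimal standalone form of the traced command (assuming the same checkout path, and dropping the -c /dev/fd/62 JSON config that build_accel_config appears to pipe in) would be:

    # one-second dualcast run on the software module, with result verification (-y)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y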
00:05:32.829 [2024-07-15 09:31:26.999998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61670 ] 00:05:32.829 [2024-07-15 09:31:27.133680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.829 [2024-07-15 09:31:27.254384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.087 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.088 09:31:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.088 09:31:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.088 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.088 09:31:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.021 09:31:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.021 09:31:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.021 09:31:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.021 09:31:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.021 09:31:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.021 09:31:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:34.022 09:31:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.022 00:05:34.022 real 0m1.503s 00:05:34.022 user 0m1.288s 00:05:34.022 sys 0m0.120s 00:05:34.022 ************************************ 00:05:34.022 END TEST accel_dualcast 00:05:34.022 ************************************ 00:05:34.022 09:31:28 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.022 09:31:28 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:34.280 09:31:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.280 09:31:28 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:34.280 09:31:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:34.280 09:31:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.280 09:31:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.280 ************************************ 00:05:34.280 START TEST accel_compare 00:05:34.280 ************************************ 00:05:34.280 09:31:28 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:34.280 09:31:28 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:34.280 [2024-07-15 09:31:28.555174] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
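The compare case set up here follows the same pattern, checking two equal-sized buffers for a mismatch. Under the same assumptions as the dualcast sketch above, a standalone run would be:

    # one-second buffer-compare run, verified results
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y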
00:05:34.280 [2024-07-15 09:31:28.555280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61704 ] 00:05:34.280 [2024-07-15 09:31:28.692542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.539 [2024-07-15 09:31:28.822100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.539 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.540 09:31:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:35.913 09:31:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.913 00:05:35.913 real 0m1.530s 00:05:35.913 user 0m1.309s 00:05:35.913 sys 0m0.127s 00:05:35.913 09:31:30 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.913 09:31:30 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:35.913 ************************************ 00:05:35.913 END TEST accel_compare 00:05:35.913 ************************************ 00:05:35.913 09:31:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.913 09:31:30 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:35.913 09:31:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:35.913 09:31:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.913 09:31:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.913 ************************************ 00:05:35.913 START TEST accel_xor 00:05:35.913 ************************************ 00:05:35.913 09:31:30 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:35.913 09:31:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:35.913 [2024-07-15 09:31:30.137993] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
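The first xor pass starting here runs with the default source count (the trace records val=2 for it). A standalone equivalent under the same assumptions would be:

    # one-second xor run over the default two source buffers
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y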
00:05:35.913 [2024-07-15 09:31:30.138082] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61739 ] 00:05:35.913 [2024-07-15 09:31:30.274136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.171 [2024-07-15 09:31:30.394000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.171 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.172 09:31:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.545 09:31:31 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.545 ************************************ 00:05:37.545 END TEST accel_xor 00:05:37.545 ************************************ 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.545 00:05:37.545 real 0m1.512s 00:05:37.545 user 0m1.298s 00:05:37.545 sys 0m0.118s 00:05:37.545 09:31:31 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.545 09:31:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:37.545 09:31:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.545 09:31:31 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:37.545 09:31:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:37.545 09:31:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.545 09:31:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.545 ************************************ 00:05:37.545 START TEST accel_xor 00:05:37.545 ************************************ 00:05:37.545 09:31:31 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:37.545 09:31:31 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:37.545 [2024-07-15 09:31:31.690528] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
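The second xor pass starting here differs only in the source-buffer count: the run_test line adds -x 3 and the trace records val=3 instead of val=2. The standalone form, with the extra flag (assumed here to set the xor source count), would be:

    # xor across three source buffers instead of two
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3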
00:05:37.545 [2024-07-15 09:31:31.690607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61779 ] 00:05:37.545 [2024-07-15 09:31:31.828354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.545 [2024-07-15 09:31:31.959311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.803 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.803 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.804 09:31:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.739 09:31:33 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.739 ************************************ 00:05:38.739 END TEST accel_xor 00:05:38.739 ************************************ 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:38.739 09:31:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.739 00:05:38.739 real 0m1.519s 00:05:38.739 user 0m1.307s 00:05:38.739 sys 0m0.117s 00:05:38.739 09:31:33 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.739 09:31:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:39.013 09:31:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.013 09:31:33 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:39.013 09:31:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:39.013 09:31:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.013 09:31:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.013 ************************************ 00:05:39.013 START TEST accel_dif_verify 00:05:39.013 ************************************ 00:05:39.013 09:31:33 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:39.013 09:31:33 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:39.013 [2024-07-15 09:31:33.261462] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
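The dif_verify pass that starts above is driven by the accel_perf invocation captured in this trace, /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify. A minimal hand-run sketch of the same workload, assuming the build-tree layout shown in the log and assuming accel_perf falls back to its built-in software module when no -c config is given (which would match the accel_module=software value in the trace), is:
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w dif_verify   # -t 1 and -w dif_verify copied from the traced command
The -c /dev/fd/62 used by the harness only feeds in the JSON accel config that build_accel_config assembles on the fly, so it is not needed for a quick manual repeat.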
00:05:39.013 [2024-07-15 09:31:33.261598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61808 ] 00:05:39.013 [2024-07-15 09:31:33.407773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.272 [2024-07-15 09:31:33.549977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.272 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.272 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.272 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.272 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.272 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.272 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.272 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.272 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.272 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:39.272 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.273 09:31:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.649 09:31:34 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:40.649 09:31:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.649 00:05:40.649 real 0m1.553s 00:05:40.649 user 0m1.328s 00:05:40.649 sys 0m0.131s 00:05:40.649 09:31:34 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.649 ************************************ 00:05:40.649 END TEST accel_dif_verify 00:05:40.649 ************************************ 00:05:40.649 09:31:34 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:40.649 09:31:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.649 09:31:34 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:40.649 09:31:34 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:40.649 09:31:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.649 09:31:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.649 ************************************ 00:05:40.649 START TEST accel_dif_generate 00:05:40.649 ************************************ 00:05:40.649 09:31:34 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.649 09:31:34 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:40.649 09:31:34 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:40.649 [2024-07-15 09:31:34.860781] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:40.650 [2024-07-15 09:31:34.860876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61848 ] 00:05:40.650 [2024-07-15 09:31:34.998501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.909 [2024-07-15 09:31:35.123231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.909 09:31:35 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:40.909 09:31:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:42.317 09:31:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.317 00:05:42.317 real 0m1.531s 
00:05:42.317 user 0m1.307s 00:05:42.317 sys 0m0.132s 00:05:42.317 09:31:36 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.317 ************************************ 00:05:42.317 END TEST accel_dif_generate 00:05:42.317 ************************************ 00:05:42.317 09:31:36 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:42.317 09:31:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.317 09:31:36 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:42.317 09:31:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:42.317 09:31:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.317 09:31:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.317 ************************************ 00:05:42.317 START TEST accel_dif_generate_copy 00:05:42.317 ************************************ 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:42.317 [2024-07-15 09:31:36.442170] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
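The dif_generate_copy run launched above uses the same accel_perf binary with only the -w workload changed. A comparable stand-alone sketch, under the same assumptions as the dif_verify note earlier (paths taken from the log, default software module when -c is omitted), is:
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w dif_generate_copy   # exercises DIF generate+copy on the '4096 bytes' buffers the trace reports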
00:05:42.317 [2024-07-15 09:31:36.442254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61877 ] 00:05:42.317 [2024-07-15 09:31:36.578460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.317 [2024-07-15 09:31:36.682133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.317 09:31:36 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.317 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.318 09:31:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.693 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.693 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.694 00:05:43.694 real 0m1.503s 00:05:43.694 user 0m0.015s 00:05:43.694 sys 0m0.004s 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.694 ************************************ 00:05:43.694 END TEST accel_dif_generate_copy 00:05:43.694 ************************************ 00:05:43.694 09:31:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:43.694 09:31:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.694 09:31:37 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:43.694 09:31:37 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:43.694 09:31:37 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:43.694 09:31:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.694 09:31:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.694 ************************************ 00:05:43.694 START TEST accel_comp 00:05:43.694 ************************************ 00:05:43.694 09:31:37 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:43.694 09:31:37 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:43.694 09:31:37 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:43.694 [2024-07-15 09:31:37.991079] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:43.694 [2024-07-15 09:31:37.991168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61917 ] 00:05:43.694 [2024-07-15 09:31:38.122339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.954 [2024-07-15 09:31:38.241609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:43.954 09:31:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:45.366 09:31:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.366 00:05:45.366 real 0m1.508s 00:05:45.366 user 0m1.301s 00:05:45.366 sys 0m0.115s 00:05:45.366 09:31:39 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.366 ************************************ 00:05:45.366 END TEST accel_comp 00:05:45.366 ************************************ 00:05:45.366 09:31:39 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:45.366 09:31:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.366 09:31:39 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:45.366 09:31:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:45.366 09:31:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.366 09:31:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.366 ************************************ 00:05:45.366 START TEST accel_decomp 00:05:45.366 ************************************ 00:05:45.366 09:31:39 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:45.366 09:31:39 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:45.366 [2024-07-15 09:31:39.548295] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:45.366 [2024-07-15 09:31:39.548399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61946 ] 00:05:45.366 [2024-07-15 09:31:39.686448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.366 [2024-07-15 09:31:39.813194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.625 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.626 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:05:45.626 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.626 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.626 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.626 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.626 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.626 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.626 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:45.626 09:31:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:45.626 09:31:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.626 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:45.626 09:31:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:47.000 09:31:41 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.000 00:05:47.000 real 0m1.527s 00:05:47.000 user 0m1.320s 00:05:47.000 sys 0m0.114s 00:05:47.000 09:31:41 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.000 ************************************ 00:05:47.000 END TEST accel_decomp 00:05:47.000 ************************************ 00:05:47.000 09:31:41 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:47.000 09:31:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.000 09:31:41 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:47.000 09:31:41 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:47.000 09:31:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.000 09:31:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.000 ************************************ 00:05:47.000 START TEST accel_decomp_full 00:05:47.000 ************************************ 00:05:47.000 09:31:41 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:47.000 09:31:41 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:47.000 [2024-07-15 09:31:41.128226] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
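The accel_decomp_full case that begins here repeats the accel_decomp run that just finished above, adding only -o 0 to the accel_perf command line: the trace then records a 111250-byte working size instead of the 4096-byte default, and accel_module=software shows the software path servicing the decompress in both runs. A rough standalone reproduction of the two invocations, using the paths from this job and omitting the -c /dev/fd/62 JSON config that the harness feeds in, might look like the sketch below (an assumption-laden sketch, not part of the test itself):
# Paths as used on this job's VM; adjust for a local SPDK build.
PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
# accel_decomp: 1-second software decompress of the test file, 4096-byte default size
"$PERF" -t 1 -w decompress -l "$BIB" -y
# accel_decomp_full: same workload with -o 0, which the trace records as a 111250-byte size
"$PERF" -t 1 -w decompress -l "$BIB" -y -o 0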
00:05:47.000 [2024-07-15 09:31:41.128323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61986 ] 00:05:47.000 [2024-07-15 09:31:41.267080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.000 [2024-07-15 09:31:41.385726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.001 09:31:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.375 09:31:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.375 09:31:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.375 09:31:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.375 09:31:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.375 09:31:42 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:48.376 09:31:42 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.376 00:05:48.376 real 0m1.542s 00:05:48.376 user 0m1.323s 00:05:48.376 sys 0m0.126s 00:05:48.376 09:31:42 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.376 09:31:42 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:48.376 ************************************ 00:05:48.376 END TEST accel_decomp_full 00:05:48.376 ************************************ 00:05:48.376 09:31:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.376 09:31:42 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:48.376 09:31:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:48.376 09:31:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.376 09:31:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.376 ************************************ 00:05:48.376 START TEST accel_decomp_mcore 00:05:48.376 ************************************ 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:48.376 09:31:42 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:48.376 [2024-07-15 09:31:42.724701] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:48.376 [2024-07-15 09:31:42.724853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62017 ] 00:05:48.634 [2024-07-15 09:31:42.862941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.634 [2024-07-15 09:31:42.990680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.634 [2024-07-15 09:31:42.990826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.634 [2024-07-15 09:31:42.991652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.634 [2024-07-15 09:31:42.991661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:48.634 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.635 09:31:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 09:31:44 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.007 00:05:50.007 real 0m1.537s 00:05:50.007 user 0m0.016s 00:05:50.007 sys 0m0.001s 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.007 09:31:44 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:50.007 ************************************ 00:05:50.007 END TEST accel_decomp_mcore 00:05:50.007 ************************************ 00:05:50.007 09:31:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.007 09:31:44 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:50.007 09:31:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:50.007 09:31:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.007 09:31:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.007 ************************************ 00:05:50.007 START TEST accel_decomp_full_mcore 00:05:50.007 ************************************ 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.007 09:31:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:50.007 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:50.007 [2024-07-15 09:31:44.307789] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:50.007 [2024-07-15 09:31:44.307881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62060 ] 00:05:50.007 [2024-07-15 09:31:44.442665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.264 [2024-07-15 09:31:44.567627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.264 [2024-07-15 09:31:44.567771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.264 [2024-07-15 09:31:44.567995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.264 [2024-07-15 09:31:44.568096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.264 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.264 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.264 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.264 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.264 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.264 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.264 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.264 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.264 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.264 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.264 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.264 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:50.265 09:31:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.265 09:31:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.696 09:31:45 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.696 ************************************ 00:05:51.696 END TEST accel_decomp_full_mcore 00:05:51.696 ************************************ 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.696 00:05:51.696 real 0m1.539s 00:05:51.696 user 0m0.017s 00:05:51.696 sys 0m0.003s 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.696 09:31:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:51.696 09:31:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.696 09:31:45 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:51.696 09:31:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:51.696 09:31:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.696 09:31:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.696 ************************************ 00:05:51.696 START TEST accel_decomp_mthread 00:05:51.696 ************************************ 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:51.696 09:31:45 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:51.696 [2024-07-15 09:31:45.896553] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
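The two multi-core cases that finish just above differ from the single-core ones only in the added -m 0xf core mask; the app log confirms four cores ("Total cores available: 4") with reactors started on cores 0-3, and accel_decomp_full_mcore keeps -o 0 as well. Under the same assumptions as the earlier sketch, the pair of invocations reduces to the lines below; the accel_decomp_mthread startup that has just begun continues after them.
PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
# accel_decomp_mcore: decompress spread over four reactors (-m 0xf)
"$PERF" -t 1 -w decompress -l "$BIB" -y -m 0xf
# accel_decomp_full_mcore: same core mask combined with the full-size (-o 0) run
"$PERF" -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf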
00:05:51.696 [2024-07-15 09:31:45.896644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62092 ] 00:05:51.696 [2024-07-15 09:31:46.030750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.696 [2024-07-15 09:31:46.147464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.954 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.954 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.954 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.954 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.954 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 09:31:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.348 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.349 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:53.349 09:31:47 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.349 00:05:53.349 real 0m1.506s 00:05:53.349 user 0m1.299s 00:05:53.349 sys 0m0.113s 00:05:53.349 09:31:47 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.349 09:31:47 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:53.349 ************************************ 00:05:53.349 END TEST accel_decomp_mthread 00:05:53.349 ************************************ 00:05:53.349 09:31:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.349 09:31:47 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.349 09:31:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:53.349 09:31:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.349 09:31:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.349 ************************************ 00:05:53.349 START 
TEST accel_decomp_full_mthread 00:05:53.349 ************************************ 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:53.349 [2024-07-15 09:31:47.453695] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
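Both threaded cases pass -T 2, and their traces record a value of 2 where the single-threaded runs record 1, presumably the worker-thread count; accel_decomp_full_mthread, whose configuration is dumped here, combines that with -o 0, while the EAL arguments keep the single-core -c 0x1 mask. Under the same assumptions as the sketches above, the two invocations look like this:
PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
# accel_decomp_mthread: two workers (-T 2) on the single core
"$PERF" -t 1 -w decompress -l "$BIB" -y -T 2
# accel_decomp_full_mthread: two workers plus the full-size output (-o 0)
"$PERF" -t 1 -w decompress -l "$BIB" -y -o 0 -T 2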
00:05:53.349 [2024-07-15 09:31:47.453787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62132 ] 00:05:53.349 [2024-07-15 09:31:47.589423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.349 [2024-07-15 09:31:47.707416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.349 09:31:47 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.349 09:31:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.723 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.724 00:05:54.724 real 0m1.563s 00:05:54.724 user 0m1.340s 00:05:54.724 sys 0m0.123s 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.724 ************************************ 00:05:54.724 END TEST accel_decomp_full_mthread 00:05:54.724 ************************************ 00:05:54.724 09:31:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
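The multi-threaded full-buffer decompress case above reduces to a single accel_perf invocation; a minimal sketch of reproducing it by hand, assuming a built SPDK tree at /home/vagrant/spdk_repo/spdk and the default software accel module (no JSON accel config piped in on fd 62 as the harness does):

  # mirrors the accel_perf command traced above: decompress workload on the
  # bundled test/accel/bib input, verified (-y), full-size output (-o 0),
  # 1 second runtime (-t 1), two worker threads (-T 2)
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2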
00:05:54.724 09:31:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.724 09:31:49 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:54.724 09:31:49 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:54.724 09:31:49 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:54.724 09:31:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.724 09:31:49 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:54.724 09:31:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.724 09:31:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.724 09:31:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.724 09:31:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.724 09:31:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.724 09:31:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.724 09:31:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:54.724 09:31:49 accel -- accel/accel.sh@41 -- # jq -r . 00:05:54.724 ************************************ 00:05:54.724 START TEST accel_dif_functional_tests 00:05:54.724 ************************************ 00:05:54.724 09:31:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:54.724 [2024-07-15 09:31:49.105730] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:54.724 [2024-07-15 09:31:49.105833] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62168 ] 00:05:54.982 [2024-07-15 09:31:49.240190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.982 [2024-07-15 09:31:49.368612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.982 [2024-07-15 09:31:49.368749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.982 [2024-07-15 09:31:49.368752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.982 [2024-07-15 09:31:49.426013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.241 00:05:55.241 00:05:55.241 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.241 http://cunit.sourceforge.net/ 00:05:55.241 00:05:55.241 00:05:55.241 Suite: accel_dif 00:05:55.241 Test: verify: DIF generated, GUARD check ...passed 00:05:55.241 Test: verify: DIF generated, APPTAG check ...passed 00:05:55.241 Test: verify: DIF generated, REFTAG check ...passed 00:05:55.241 Test: verify: DIF not generated, GUARD check ...[2024-07-15 09:31:49.469813] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:55.241 passed 00:05:55.241 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 09:31:49.470057] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:55.241 passed 00:05:55.241 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 09:31:49.470147] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:55.241 passed 00:05:55.241 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:55.241 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:05:55.241 Test: verify: APPTAG incorrect, no 
APPTAG check ...[2024-07-15 09:31:49.470259] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:55.241 passed 00:05:55.241 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:55.241 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:55.241 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 09:31:49.470604] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:55.241 passed 00:05:55.241 Test: verify copy: DIF generated, GUARD check ...passed 00:05:55.241 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:55.241 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:55.241 Test: verify copy: DIF not generated, GUARD check ...passed 00:05:55.241 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 09:31:49.470997] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:55.241 [2024-07-15 09:31:49.471048] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:55.241 passed 00:05:55.241 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 09:31:49.471229] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:55.241 passed 00:05:55.241 Test: generate copy: DIF generated, GUARD check ...passed 00:05:55.241 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:55.241 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:55.241 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:55.241 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:55.241 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:55.241 Test: generate copy: iovecs-len validate ...[2024-07-15 09:31:49.471857] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:55.241 passed 00:05:55.241 Test: generate copy: buffer alignment validate ...passed 00:05:55.241 00:05:55.241 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.241 suites 1 1 n/a 0 0 00:05:55.241 tests 26 26 26 0 0 00:05:55.241 asserts 115 115 115 0 n/a 00:05:55.241 00:05:55.241 Elapsed time = 0.007 seconds 00:05:55.500 ************************************ 00:05:55.500 END TEST accel_dif_functional_tests 00:05:55.500 ************************************ 00:05:55.500 00:05:55.500 real 0m0.658s 00:05:55.500 user 0m0.896s 00:05:55.500 sys 0m0.158s 00:05:55.500 09:31:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.500 09:31:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:55.500 09:31:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.500 ************************************ 00:05:55.500 END TEST accel 00:05:55.500 ************************************ 00:05:55.500 00:05:55.500 real 0m35.162s 00:05:55.500 user 0m36.853s 00:05:55.500 sys 0m4.052s 00:05:55.500 09:31:49 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.500 09:31:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.500 09:31:49 -- common/autotest_common.sh@1142 -- # return 0 00:05:55.500 09:31:49 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:55.500 09:31:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.500 09:31:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.500 09:31:49 -- common/autotest_common.sh@10 -- # set +x 00:05:55.500 ************************************ 00:05:55.500 START TEST accel_rpc 00:05:55.500 ************************************ 00:05:55.500 09:31:49 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:55.500 * Looking for test storage... 00:05:55.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:55.500 09:31:49 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.500 09:31:49 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62232 00:05:55.500 09:31:49 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:55.500 09:31:49 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62232 00:05:55.500 09:31:49 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62232 ']' 00:05:55.500 09:31:49 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.500 09:31:49 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.500 09:31:49 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.500 09:31:49 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.500 09:31:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.758 [2024-07-15 09:31:49.969329] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
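The accel_rpc suite starting here drives the same accel framework over JSON-RPC instead of the accel_perf binary: spdk_tgt is launched with --wait-for-rpc, an opcode is pinned to a module before the framework initializes, and the assignment is read back. A minimal sketch of that flow with rpc.py, assuming a target already listening on the default /var/tmp/spdk.sock:

  # pin the copy opcode to the software module, then let the framework start
  scripts/rpc.py accel_assign_opc -o copy -m software
  scripts/rpc.py framework_start_init
  # read the assignment back; the test greps for "software" under .copy
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy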
00:05:55.758 [2024-07-15 09:31:49.970339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62232 ] 00:05:55.758 [2024-07-15 09:31:50.114081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.016 [2024-07-15 09:31:50.243123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.582 09:31:50 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.582 09:31:50 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.582 09:31:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:56.582 09:31:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:56.582 09:31:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:56.582 09:31:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:56.582 09:31:50 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:56.582 09:31:50 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.582 09:31:50 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.582 09:31:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.582 ************************************ 00:05:56.582 START TEST accel_assign_opcode 00:05:56.582 ************************************ 00:05:56.582 09:31:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:56.583 09:31:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:56.583 09:31:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.583 09:31:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.583 [2024-07-15 09:31:50.979752] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:56.583 09:31:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.583 09:31:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:56.583 09:31:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.583 09:31:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.583 [2024-07-15 09:31:50.987736] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:56.583 09:31:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.583 09:31:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:56.583 09:31:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.583 09:31:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.841 [2024-07-15 09:31:51.052599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:56.841 09:31:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.841 09:31:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:56.841 09:31:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:56.841 09:31:51 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:56.841 09:31:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.841 09:31:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.841 09:31:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.841 software 00:05:56.841 00:05:56.841 real 0m0.313s 00:05:56.841 user 0m0.053s 00:05:56.841 sys 0m0.013s 00:05:56.841 ************************************ 00:05:56.841 END TEST accel_assign_opcode 00:05:56.841 ************************************ 00:05:56.841 09:31:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.841 09:31:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:57.099 09:31:51 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:57.099 09:31:51 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62232 00:05:57.099 09:31:51 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62232 ']' 00:05:57.099 09:31:51 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62232 00:05:57.099 09:31:51 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:57.099 09:31:51 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.099 09:31:51 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62232 00:05:57.099 killing process with pid 62232 00:05:57.099 09:31:51 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.099 09:31:51 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.099 09:31:51 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62232' 00:05:57.099 09:31:51 accel_rpc -- common/autotest_common.sh@967 -- # kill 62232 00:05:57.099 09:31:51 accel_rpc -- common/autotest_common.sh@972 -- # wait 62232 00:05:57.357 ************************************ 00:05:57.357 END TEST accel_rpc 00:05:57.357 ************************************ 00:05:57.357 00:05:57.357 real 0m1.959s 00:05:57.357 user 0m2.053s 00:05:57.357 sys 0m0.458s 00:05:57.357 09:31:51 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.357 09:31:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.357 09:31:51 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.357 09:31:51 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:57.357 09:31:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.357 09:31:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.357 09:31:51 -- common/autotest_common.sh@10 -- # set +x 00:05:57.615 ************************************ 00:05:57.615 START TEST app_cmdline 00:05:57.615 ************************************ 00:05:57.615 09:31:51 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:57.615 * Looking for test storage... 
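The app_cmdline suite below starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods may answer before framework init and any other call should fail with "Method not found". A minimal sketch of the same check with rpc.py, assuming the restricted target is up on the default socket:

  scripts/rpc.py spdk_get_version        # allowed: returns the version object
  scripts/rpc.py rpc_get_methods         # allowed: lists the permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats  # filtered here: expect JSON-RPC error -32601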
00:05:57.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:57.615 09:31:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:57.615 09:31:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62327 00:05:57.615 09:31:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62327 00:05:57.615 09:31:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:57.615 09:31:51 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62327 ']' 00:05:57.615 09:31:51 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.615 09:31:51 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.615 09:31:51 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.615 09:31:51 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.615 09:31:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.615 [2024-07-15 09:31:51.976784] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:57.615 [2024-07-15 09:31:51.976896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62327 ] 00:05:57.872 [2024-07-15 09:31:52.113871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.872 [2024-07-15 09:31:52.239105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.872 [2024-07-15 09:31:52.296174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:58.807 09:31:52 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.807 09:31:52 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:58.807 09:31:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:58.807 { 00:05:58.807 "version": "SPDK v24.09-pre git sha1 e7cce062d", 00:05:58.807 "fields": { 00:05:58.807 "major": 24, 00:05:58.807 "minor": 9, 00:05:58.807 "patch": 0, 00:05:58.807 "suffix": "-pre", 00:05:58.807 "commit": "e7cce062d" 00:05:58.807 } 00:05:58.807 } 00:05:59.066 09:31:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:59.066 09:31:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:59.066 09:31:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:59.066 09:31:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:59.066 09:31:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:59.066 09:31:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:59.066 09:31:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:59.066 09:31:53 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.066 09:31:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:59.066 09:31:53 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.066 09:31:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:59.066 09:31:53 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:59.067 09:31:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:59.067 09:31:53 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:59.067 09:31:53 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:59.067 09:31:53 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:59.067 09:31:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.067 09:31:53 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:59.067 09:31:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.067 09:31:53 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:59.067 09:31:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.067 09:31:53 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:59.067 09:31:53 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:59.067 09:31:53 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:59.324 request: 00:05:59.324 { 00:05:59.325 "method": "env_dpdk_get_mem_stats", 00:05:59.325 "req_id": 1 00:05:59.325 } 00:05:59.325 Got JSON-RPC error response 00:05:59.325 response: 00:05:59.325 { 00:05:59.325 "code": -32601, 00:05:59.325 "message": "Method not found" 00:05:59.325 } 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.325 09:31:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62327 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62327 ']' 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62327 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62327 00:05:59.325 killing process with pid 62327 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62327' 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@967 -- # kill 62327 00:05:59.325 09:31:53 app_cmdline -- common/autotest_common.sh@972 -- # wait 62327 00:05:59.581 00:05:59.581 real 0m2.209s 00:05:59.581 user 0m2.768s 00:05:59.581 sys 0m0.508s 00:05:59.581 ************************************ 00:05:59.581 END TEST app_cmdline 00:05:59.581 ************************************ 00:05:59.581 09:31:54 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.581 09:31:54 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:05:59.840 09:31:54 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.840 09:31:54 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:59.840 09:31:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.840 09:31:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.840 09:31:54 -- common/autotest_common.sh@10 -- # set +x 00:05:59.840 ************************************ 00:05:59.840 START TEST version 00:05:59.840 ************************************ 00:05:59.841 09:31:54 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:59.841 * Looking for test storage... 00:05:59.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:59.841 09:31:54 version -- app/version.sh@17 -- # get_header_version major 00:05:59.841 09:31:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:59.841 09:31:54 version -- app/version.sh@14 -- # cut -f2 00:05:59.841 09:31:54 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.841 09:31:54 version -- app/version.sh@17 -- # major=24 00:05:59.841 09:31:54 version -- app/version.sh@18 -- # get_header_version minor 00:05:59.841 09:31:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:59.841 09:31:54 version -- app/version.sh@14 -- # cut -f2 00:05:59.841 09:31:54 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.841 09:31:54 version -- app/version.sh@18 -- # minor=9 00:05:59.841 09:31:54 version -- app/version.sh@19 -- # get_header_version patch 00:05:59.841 09:31:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:59.841 09:31:54 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.841 09:31:54 version -- app/version.sh@14 -- # cut -f2 00:05:59.841 09:31:54 version -- app/version.sh@19 -- # patch=0 00:05:59.841 09:31:54 version -- app/version.sh@20 -- # get_header_version suffix 00:05:59.841 09:31:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:59.841 09:31:54 version -- app/version.sh@14 -- # cut -f2 00:05:59.841 09:31:54 version -- app/version.sh@14 -- # tr -d '"' 00:05:59.841 09:31:54 version -- app/version.sh@20 -- # suffix=-pre 00:05:59.841 09:31:54 version -- app/version.sh@22 -- # version=24.9 00:05:59.841 09:31:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:59.841 09:31:54 version -- app/version.sh@28 -- # version=24.9rc0 00:05:59.841 09:31:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:59.841 09:31:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:59.841 09:31:54 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:59.841 09:31:54 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:59.841 00:05:59.841 real 0m0.154s 00:05:59.841 user 0m0.088s 00:05:59.841 sys 0m0.096s 00:05:59.841 ************************************ 00:05:59.841 END TEST version 00:05:59.841 ************************************ 00:05:59.841 09:31:54 
version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.841 09:31:54 version -- common/autotest_common.sh@10 -- # set +x 00:05:59.841 09:31:54 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.841 09:31:54 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:59.841 09:31:54 -- spdk/autotest.sh@198 -- # uname -s 00:05:59.841 09:31:54 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:59.841 09:31:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:59.841 09:31:54 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:05:59.841 09:31:54 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:05:59.841 09:31:54 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:59.841 09:31:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.841 09:31:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.841 09:31:54 -- common/autotest_common.sh@10 -- # set +x 00:05:59.841 ************************************ 00:05:59.841 START TEST spdk_dd 00:05:59.841 ************************************ 00:05:59.841 09:31:54 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:00.099 * Looking for test storage... 00:06:00.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:00.099 09:31:54 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.099 09:31:54 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.099 09:31:54 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.099 09:31:54 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.099 09:31:54 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.099 09:31:54 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.099 09:31:54 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.099 09:31:54 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:00.100 09:31:54 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.100 09:31:54 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:00.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:00.358 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:00.358 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:00.358 09:31:54 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:00.358 09:31:54 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:00.359 09:31:54 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:00.359 09:31:54 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:00.359 09:31:54 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:00.359 09:31:54 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:00.359 09:31:54 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:00.359 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.359 09:31:54 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:00.359 09:31:54 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:00.619 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:00.620 09:31:54 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:00.620 
09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 
spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:00.620 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:00.621 * spdk_dd linked to liburing 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@57 
-- # CONFIG_HAVE_LIBBSD=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:00.621 09:31:54 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:00.621 09:31:54 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:00.621 09:31:54 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:00.621 09:31:54 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:00.621 09:31:54 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:00.621 09:31:54 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.621 09:31:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:00.621 ************************************ 00:06:00.621 START TEST spdk_dd_basic_rw 00:06:00.621 ************************************ 00:06:00.621 09:31:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:00.621 * Looking for test storage... 
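
To make the trace above easier to follow: the liburing detection that just ran (dd/common.sh@142-157) amounts to scanning the ldd output of spdk_dd for a liburing.so.* dependency and cross-checking the build configuration. The sketch below is reconstructed from the xtrace, not copied from the SPDK helper; the standalone structure and variable names are illustrative, and $SPDK_BIN_DIR / $rootdir stand in for the repository paths shown in the log.

    # Sketch: decide whether spdk_dd can exercise the uring path.
    liburing_in_use=0
    while read -r lib _ so _; do                  # ldd lines look like "libX.so.N => /path/libX.so.N (0x...)"
        [[ $lib == liburing.so.* ]] || continue
        printf '* spdk_dd linked to liburing\n'
        source "$rootdir/test/common/build_config.sh"        # provides CONFIG_URING and friends
        if [[ $CONFIG_URING == y && -e /usr/lib64/liburing.so.2 ]]; then
            liburing_in_use=1
        fi
        break
    done < <(ldd "$SPDK_BIN_DIR/spdk_dd")
    export liburing_in_use

With liburing_in_use=1 and SPDK_TEST_URING=1, the guard at dd.sh@15 evaluates false and the suite proceeds straight to run_test spdk_dd_basic_rw, which is the START TEST banner above.
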
00:06:00.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:00.621 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.621 09:31:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.621 09:31:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.621 09:31:54 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.621 09:31:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:00.622 09:31:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.894 ************************************ 00:06:00.894 START TEST dd_bs_lt_native_bs 00:06:00.894 ************************************ 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:00.894 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:00.895 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:00.895 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.895 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.895 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.895 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.895 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.895 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.895 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:00.895 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:00.895 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:00.895 { 00:06:00.895 "subsystems": [ 00:06:00.895 { 00:06:00.895 "subsystem": "bdev", 00:06:00.895 "config": [ 00:06:00.895 { 00:06:00.895 "params": { 00:06:00.895 "trtype": "pcie", 00:06:00.895 "traddr": "0000:00:10.0", 00:06:00.895 "name": "Nvme0" 00:06:00.895 }, 00:06:00.895 "method": "bdev_nvme_attach_controller" 00:06:00.895 }, 00:06:00.895 { 00:06:00.895 "method": "bdev_wait_for_examine" 00:06:00.895 } 00:06:00.895 ] 00:06:00.895 } 00:06:00.895 ] 00:06:00.895 } 00:06:00.895 [2024-07-15 09:31:55.236577] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
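
Worth noting what dd_bs_lt_native_bs is asserting here: the identify dump above resolved the namespace's current LBA format to #04, i.e. a 4096-byte native block, and the test expects spdk_dd to refuse a smaller --bs. A condensed sketch of the assertion follows; the harness plumbing (run_test, gen_conf, the /dev/fd descriptors) is treated as given, and NOT is reduced to its essence rather than the full autotest_common.sh helper.

    # Sketch: spdk_dd must fail when --bs is below the native block size (4096 here).
    NOT() { ! "$@"; }     # simplified stand-in for the autotest_common.sh helper
    native_bs=4096        # from spdk_nvme_identify: "Current LBA Format: ... #04" -> "Data Size: 4096"
    NOT "$SPDK_BIN_DIR/spdk_dd" --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 &&
        echo "spdk_dd rejected --bs=2048 < ${native_bs} as expected"

The failure path appears a few lines further on as the *ERROR* "--bs value cannot be less than input (1) neither output (4096) native block size", followed by the es=234 -> es=1 bookkeeping that converts the expected non-zero exit into a passing test.
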
00:06:00.895 [2024-07-15 09:31:55.236943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62653 ] 00:06:01.175 [2024-07-15 09:31:55.379374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.175 [2024-07-15 09:31:55.509978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.175 [2024-07-15 09:31:55.573015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:01.433 [2024-07-15 09:31:55.685210] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:01.433 [2024-07-15 09:31:55.685282] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.433 [2024-07-15 09:31:55.822204] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.691 00:06:01.691 real 0m0.765s 00:06:01.691 user 0m0.519s 00:06:01.691 sys 0m0.195s 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:01.691 ************************************ 00:06:01.691 END TEST dd_bs_lt_native_bs 00:06:01.691 ************************************ 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.691 ************************************ 00:06:01.691 START TEST dd_rw 00:06:01.691 ************************************ 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:01.691 09:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.258 09:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:02.258 09:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:02.258 09:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:02.258 09:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.258 { 00:06:02.258 "subsystems": [ 00:06:02.258 { 00:06:02.258 "subsystem": "bdev", 00:06:02.258 "config": [ 00:06:02.258 { 00:06:02.258 "params": { 00:06:02.258 "trtype": "pcie", 00:06:02.258 "traddr": "0000:00:10.0", 00:06:02.258 "name": "Nvme0" 00:06:02.258 }, 00:06:02.258 "method": "bdev_nvme_attach_controller" 00:06:02.258 }, 00:06:02.258 { 00:06:02.258 "method": "bdev_wait_for_examine" 00:06:02.258 } 00:06:02.258 ] 00:06:02.258 } 00:06:02.258 ] 00:06:02.258 } 00:06:02.515 [2024-07-15 09:31:56.732369] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:02.515 [2024-07-15 09:31:56.732511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62684 ] 00:06:02.515 [2024-07-15 09:31:56.875931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.773 [2024-07-15 09:31:57.005548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.773 [2024-07-15 09:31:57.064450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.030  Copying: 60/60 [kB] (average 19 MBps) 00:06:03.030 00:06:03.030 09:31:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:03.030 09:31:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:03.030 09:31:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.030 09:31:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.030 [2024-07-15 09:31:57.474888] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
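
From this point the dd_rw trace repeats the same write / read-back / verify cycle for every block-size and queue-depth combination, so the rest of the stretch is easiest to read with the loop structure in mind. The sketch below is reconstructed from the bss/qds setup above and the @30/@37/@44/@45 steps that follow; gen_bytes and clear_nvme are the harness's own helpers, while the loop body, count values, and the omitted --json/gen_conf plumbing are paraphrased.

    # Condensed sketch of the dd_rw iteration.
    native_bs=4096
    qds=(1 64)
    bss=()
    for i in {0..2}; do bss+=($((native_bs << i))); done   # 4096, 8192, 16384

    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            count=15                       # per-bs in the real script: 15 at 4 KiB, 7 at 8 KiB
            size=$((count * bs))           # 15 * 4096 = 61440 for the pass shown above
            gen_bytes "$size" > "$test_file0"                                   # random payload
            spdk_dd --if="$test_file0" --ob=Nvme0n1 --bs="$bs" --qd="$qd"       # write (@30)
            spdk_dd --ib=Nvme0n1 --of="$test_file1" --bs="$bs" --qd="$qd" --count="$count"  # read back (@37)
            diff -q "$test_file0" "$test_file1"                                 # verify round trip (@44)
            clear_nvme Nvme0n1 '' "$size"                                       # zero the bdev (@45)
        done
    done

The alternating progress lines in the remainder of this stretch map onto that loop: the "Copying: 60/60 [kB]" and "Copying: 56/56 [kB]" passes are the data transfers at 4 KiB and 8 KiB block sizes, and the "Copying: 1024/1024 [kB]" passes are the 1 MiB /dev/zero writes that clear_nvme issues between them (dd/common.sh@18, bs=1048576, count=1).
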
00:06:03.030 [2024-07-15 09:31:57.475027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62703 ] 00:06:03.030 { 00:06:03.030 "subsystems": [ 00:06:03.030 { 00:06:03.030 "subsystem": "bdev", 00:06:03.030 "config": [ 00:06:03.030 { 00:06:03.030 "params": { 00:06:03.030 "trtype": "pcie", 00:06:03.030 "traddr": "0000:00:10.0", 00:06:03.030 "name": "Nvme0" 00:06:03.030 }, 00:06:03.030 "method": "bdev_nvme_attach_controller" 00:06:03.030 }, 00:06:03.030 { 00:06:03.030 "method": "bdev_wait_for_examine" 00:06:03.031 } 00:06:03.031 ] 00:06:03.031 } 00:06:03.031 ] 00:06:03.031 } 00:06:03.288 [2024-07-15 09:31:57.614096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.288 [2024-07-15 09:31:57.738504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.546 [2024-07-15 09:31:57.795959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:03.804  Copying: 60/60 [kB] (average 14 MBps) 00:06:03.804 00:06:03.804 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.804 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:03.804 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:03.804 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:03.804 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:03.804 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:03.804 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:03.804 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:03.804 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:03.804 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.804 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.804 [2024-07-15 09:31:58.206289] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:03.804 [2024-07-15 09:31:58.206682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62718 ] 00:06:03.804 { 00:06:03.804 "subsystems": [ 00:06:03.804 { 00:06:03.804 "subsystem": "bdev", 00:06:03.804 "config": [ 00:06:03.804 { 00:06:03.804 "params": { 00:06:03.804 "trtype": "pcie", 00:06:03.804 "traddr": "0000:00:10.0", 00:06:03.804 "name": "Nvme0" 00:06:03.804 }, 00:06:03.804 "method": "bdev_nvme_attach_controller" 00:06:03.804 }, 00:06:03.804 { 00:06:03.804 "method": "bdev_wait_for_examine" 00:06:03.804 } 00:06:03.804 ] 00:06:03.804 } 00:06:03.804 ] 00:06:03.804 } 00:06:04.062 [2024-07-15 09:31:58.338422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.062 [2024-07-15 09:31:58.460531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.062 [2024-07-15 09:31:58.518596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:04.577  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:04.577 00:06:04.577 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:04.577 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:04.577 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:04.577 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:04.577 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:04.577 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:04.577 09:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.149 09:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:05.149 09:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:05.149 09:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:05.149 09:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.149 { 00:06:05.149 "subsystems": [ 00:06:05.149 { 00:06:05.149 "subsystem": "bdev", 00:06:05.149 "config": [ 00:06:05.149 { 00:06:05.149 "params": { 00:06:05.149 "trtype": "pcie", 00:06:05.149 "traddr": "0000:00:10.0", 00:06:05.149 "name": "Nvme0" 00:06:05.149 }, 00:06:05.149 "method": "bdev_nvme_attach_controller" 00:06:05.149 }, 00:06:05.149 { 00:06:05.149 "method": "bdev_wait_for_examine" 00:06:05.149 } 00:06:05.149 ] 00:06:05.149 } 00:06:05.149 ] 00:06:05.149 } 00:06:05.149 [2024-07-15 09:31:59.602119] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:05.149 [2024-07-15 09:31:59.602327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62743 ] 00:06:05.418 [2024-07-15 09:31:59.758531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.676 [2024-07-15 09:31:59.890176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.676 [2024-07-15 09:31:59.951609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.934  Copying: 60/60 [kB] (average 58 MBps) 00:06:05.934 00:06:05.934 09:32:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:05.934 09:32:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:05.934 09:32:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:05.934 09:32:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.934 [2024-07-15 09:32:00.349008] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:05.934 [2024-07-15 09:32:00.349872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62762 ] 00:06:05.934 { 00:06:05.934 "subsystems": [ 00:06:05.934 { 00:06:05.934 "subsystem": "bdev", 00:06:05.934 "config": [ 00:06:05.934 { 00:06:05.934 "params": { 00:06:05.934 "trtype": "pcie", 00:06:05.934 "traddr": "0000:00:10.0", 00:06:05.934 "name": "Nvme0" 00:06:05.934 }, 00:06:05.934 "method": "bdev_nvme_attach_controller" 00:06:05.934 }, 00:06:05.934 { 00:06:05.934 "method": "bdev_wait_for_examine" 00:06:05.934 } 00:06:05.934 ] 00:06:05.934 } 00:06:05.934 ] 00:06:05.934 } 00:06:06.192 [2024-07-15 09:32:00.485089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.192 [2024-07-15 09:32:00.602039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.450 [2024-07-15 09:32:00.661235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:06.707  Copying: 60/60 [kB] (average 58 MBps) 00:06:06.707 00:06:06.707 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:06.707 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:06.707 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:06.707 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:06.707 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:06.707 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:06.707 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:06.707 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:06.707 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:06:06.707 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:06.707 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.707 { 00:06:06.707 "subsystems": [ 00:06:06.707 { 00:06:06.707 "subsystem": "bdev", 00:06:06.707 "config": [ 00:06:06.707 { 00:06:06.707 "params": { 00:06:06.707 "trtype": "pcie", 00:06:06.707 "traddr": "0000:00:10.0", 00:06:06.707 "name": "Nvme0" 00:06:06.707 }, 00:06:06.707 "method": "bdev_nvme_attach_controller" 00:06:06.707 }, 00:06:06.707 { 00:06:06.707 "method": "bdev_wait_for_examine" 00:06:06.707 } 00:06:06.707 ] 00:06:06.707 } 00:06:06.707 ] 00:06:06.707 } 00:06:06.707 [2024-07-15 09:32:01.077145] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:06.707 [2024-07-15 09:32:01.077888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62772 ] 00:06:06.964 [2024-07-15 09:32:01.217071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.964 [2024-07-15 09:32:01.340822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.965 [2024-07-15 09:32:01.402512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.479  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:07.479 00:06:07.479 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:07.479 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:07.479 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:07.479 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:07.479 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:07.479 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:07.479 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:07.479 09:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.053 09:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:08.053 09:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:08.053 09:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:08.053 09:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.053 [2024-07-15 09:32:02.396490] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:08.053 [2024-07-15 09:32:02.396771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62802 ] 00:06:08.053 { 00:06:08.053 "subsystems": [ 00:06:08.053 { 00:06:08.053 "subsystem": "bdev", 00:06:08.053 "config": [ 00:06:08.053 { 00:06:08.053 "params": { 00:06:08.053 "trtype": "pcie", 00:06:08.053 "traddr": "0000:00:10.0", 00:06:08.053 "name": "Nvme0" 00:06:08.053 }, 00:06:08.053 "method": "bdev_nvme_attach_controller" 00:06:08.053 }, 00:06:08.053 { 00:06:08.053 "method": "bdev_wait_for_examine" 00:06:08.053 } 00:06:08.053 ] 00:06:08.053 } 00:06:08.053 ] 00:06:08.053 } 00:06:08.311 [2024-07-15 09:32:02.532769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.311 [2024-07-15 09:32:02.632576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.311 [2024-07-15 09:32:02.690992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.569  Copying: 56/56 [kB] (average 54 MBps) 00:06:08.569 00:06:08.826 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:08.826 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:08.826 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:08.826 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.826 { 00:06:08.826 "subsystems": [ 00:06:08.826 { 00:06:08.826 "subsystem": "bdev", 00:06:08.826 "config": [ 00:06:08.826 { 00:06:08.826 "params": { 00:06:08.826 "trtype": "pcie", 00:06:08.826 "traddr": "0000:00:10.0", 00:06:08.826 "name": "Nvme0" 00:06:08.826 }, 00:06:08.826 "method": "bdev_nvme_attach_controller" 00:06:08.826 }, 00:06:08.826 { 00:06:08.826 "method": "bdev_wait_for_examine" 00:06:08.826 } 00:06:08.826 ] 00:06:08.826 } 00:06:08.826 ] 00:06:08.826 } 00:06:08.826 [2024-07-15 09:32:03.093526] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:08.826 [2024-07-15 09:32:03.093621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62810 ] 00:06:08.826 [2024-07-15 09:32:03.233449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.083 [2024-07-15 09:32:03.362997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.083 [2024-07-15 09:32:03.422059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.341  Copying: 56/56 [kB] (average 27 MBps) 00:06:09.341 00:06:09.341 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.341 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:09.341 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:09.341 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:09.341 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:09.341 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:09.341 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:09.341 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:09.341 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:09.341 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:09.341 09:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:09.599 { 00:06:09.599 "subsystems": [ 00:06:09.599 { 00:06:09.599 "subsystem": "bdev", 00:06:09.599 "config": [ 00:06:09.599 { 00:06:09.599 "params": { 00:06:09.599 "trtype": "pcie", 00:06:09.599 "traddr": "0000:00:10.0", 00:06:09.599 "name": "Nvme0" 00:06:09.599 }, 00:06:09.599 "method": "bdev_nvme_attach_controller" 00:06:09.599 }, 00:06:09.599 { 00:06:09.599 "method": "bdev_wait_for_examine" 00:06:09.599 } 00:06:09.599 ] 00:06:09.599 } 00:06:09.599 ] 00:06:09.599 } 00:06:09.599 [2024-07-15 09:32:03.847720] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:09.599 [2024-07-15 09:32:03.847831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62831 ] 00:06:09.599 [2024-07-15 09:32:03.992587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.857 [2024-07-15 09:32:04.110594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.857 [2024-07-15 09:32:04.169073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.115  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:10.115 00:06:10.115 09:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:10.115 09:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:10.115 09:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:10.115 09:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:10.115 09:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:10.115 09:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:10.115 09:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.705 09:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:10.705 09:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:10.705 09:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:10.705 09:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:10.961 [2024-07-15 09:32:05.195506] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:10.961 [2024-07-15 09:32:05.195800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62850 ] 00:06:10.961 { 00:06:10.961 "subsystems": [ 00:06:10.961 { 00:06:10.961 "subsystem": "bdev", 00:06:10.961 "config": [ 00:06:10.961 { 00:06:10.961 "params": { 00:06:10.961 "trtype": "pcie", 00:06:10.961 "traddr": "0000:00:10.0", 00:06:10.961 "name": "Nvme0" 00:06:10.961 }, 00:06:10.961 "method": "bdev_nvme_attach_controller" 00:06:10.961 }, 00:06:10.961 { 00:06:10.961 "method": "bdev_wait_for_examine" 00:06:10.961 } 00:06:10.961 ] 00:06:10.961 } 00:06:10.961 ] 00:06:10.961 } 00:06:10.961 [2024-07-15 09:32:05.330979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.217 [2024-07-15 09:32:05.467525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.217 [2024-07-15 09:32:05.525350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:11.474  Copying: 56/56 [kB] (average 54 MBps) 00:06:11.474 00:06:11.474 09:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:11.474 09:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:11.474 09:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:11.474 09:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:11.474 [2024-07-15 09:32:05.928973] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:11.474 [2024-07-15 09:32:05.929097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62869 ] 00:06:11.474 { 00:06:11.474 "subsystems": [ 00:06:11.474 { 00:06:11.474 "subsystem": "bdev", 00:06:11.474 "config": [ 00:06:11.474 { 00:06:11.474 "params": { 00:06:11.474 "trtype": "pcie", 00:06:11.474 "traddr": "0000:00:10.0", 00:06:11.474 "name": "Nvme0" 00:06:11.474 }, 00:06:11.474 "method": "bdev_nvme_attach_controller" 00:06:11.474 }, 00:06:11.474 { 00:06:11.474 "method": "bdev_wait_for_examine" 00:06:11.474 } 00:06:11.474 ] 00:06:11.474 } 00:06:11.474 ] 00:06:11.474 } 00:06:11.732 [2024-07-15 09:32:06.066752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.732 [2024-07-15 09:32:06.193897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.988 [2024-07-15 09:32:06.254328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.246  Copying: 56/56 [kB] (average 54 MBps) 00:06:12.246 00:06:12.246 09:32:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.246 09:32:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:12.246 09:32:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:12.246 09:32:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:12.246 09:32:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:12.246 09:32:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:12.246 09:32:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:12.246 09:32:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:12.246 09:32:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:12.246 09:32:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.246 09:32:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.246 { 00:06:12.246 "subsystems": [ 00:06:12.246 { 00:06:12.246 "subsystem": "bdev", 00:06:12.246 "config": [ 00:06:12.246 { 00:06:12.246 "params": { 00:06:12.246 "trtype": "pcie", 00:06:12.246 "traddr": "0000:00:10.0", 00:06:12.246 "name": "Nvme0" 00:06:12.246 }, 00:06:12.246 "method": "bdev_nvme_attach_controller" 00:06:12.246 }, 00:06:12.246 { 00:06:12.246 "method": "bdev_wait_for_examine" 00:06:12.246 } 00:06:12.246 ] 00:06:12.246 } 00:06:12.246 ] 00:06:12.246 } 00:06:12.246 [2024-07-15 09:32:06.650126] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:12.246 [2024-07-15 09:32:06.650208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62890 ] 00:06:12.504 [2024-07-15 09:32:06.787060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.504 [2024-07-15 09:32:06.910869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.762 [2024-07-15 09:32:06.974439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.019  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:13.019 00:06:13.019 09:32:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:13.019 09:32:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:13.019 09:32:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:13.019 09:32:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:13.019 09:32:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:13.019 09:32:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:13.019 09:32:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:13.019 09:32:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.585 09:32:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:13.585 09:32:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:13.585 09:32:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.585 09:32:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.585 [2024-07-15 09:32:07.905310] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:13.585 [2024-07-15 09:32:07.905667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62910 ] 00:06:13.585 { 00:06:13.585 "subsystems": [ 00:06:13.585 { 00:06:13.585 "subsystem": "bdev", 00:06:13.585 "config": [ 00:06:13.585 { 00:06:13.585 "params": { 00:06:13.585 "trtype": "pcie", 00:06:13.585 "traddr": "0000:00:10.0", 00:06:13.585 "name": "Nvme0" 00:06:13.585 }, 00:06:13.585 "method": "bdev_nvme_attach_controller" 00:06:13.585 }, 00:06:13.585 { 00:06:13.585 "method": "bdev_wait_for_examine" 00:06:13.585 } 00:06:13.585 ] 00:06:13.585 } 00:06:13.585 ] 00:06:13.585 } 00:06:13.585 [2024-07-15 09:32:08.036836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.842 [2024-07-15 09:32:08.185872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.842 [2024-07-15 09:32:08.242659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.358  Copying: 48/48 [kB] (average 46 MBps) 00:06:14.358 00:06:14.358 09:32:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:14.358 09:32:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:14.358 09:32:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.358 09:32:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.358 [2024-07-15 09:32:08.667065] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:14.358 [2024-07-15 09:32:08.667153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62929 ] 00:06:14.358 { 00:06:14.358 "subsystems": [ 00:06:14.358 { 00:06:14.358 "subsystem": "bdev", 00:06:14.358 "config": [ 00:06:14.358 { 00:06:14.358 "params": { 00:06:14.358 "trtype": "pcie", 00:06:14.358 "traddr": "0000:00:10.0", 00:06:14.358 "name": "Nvme0" 00:06:14.358 }, 00:06:14.358 "method": "bdev_nvme_attach_controller" 00:06:14.358 }, 00:06:14.358 { 00:06:14.358 "method": "bdev_wait_for_examine" 00:06:14.358 } 00:06:14.358 ] 00:06:14.358 } 00:06:14.358 ] 00:06:14.358 } 00:06:14.358 [2024-07-15 09:32:08.800235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.617 [2024-07-15 09:32:08.923531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.617 [2024-07-15 09:32:08.980552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.875  Copying: 48/48 [kB] (average 46 MBps) 00:06:14.875 00:06:14.875 09:32:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:14.875 09:32:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:14.875 09:32:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:14.875 09:32:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:14.875 09:32:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:14.875 09:32:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:14.875 09:32:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:14.875 09:32:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:14.875 09:32:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:14.875 09:32:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.875 09:32:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.133 { 00:06:15.133 "subsystems": [ 00:06:15.133 { 00:06:15.133 "subsystem": "bdev", 00:06:15.133 "config": [ 00:06:15.133 { 00:06:15.133 "params": { 00:06:15.133 "trtype": "pcie", 00:06:15.133 "traddr": "0000:00:10.0", 00:06:15.133 "name": "Nvme0" 00:06:15.133 }, 00:06:15.133 "method": "bdev_nvme_attach_controller" 00:06:15.133 }, 00:06:15.133 { 00:06:15.133 "method": "bdev_wait_for_examine" 00:06:15.133 } 00:06:15.133 ] 00:06:15.133 } 00:06:15.133 ] 00:06:15.133 } 00:06:15.133 [2024-07-15 09:32:09.410279] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:15.133 [2024-07-15 09:32:09.410649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62939 ] 00:06:15.133 [2024-07-15 09:32:09.557018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.391 [2024-07-15 09:32:09.676833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.391 [2024-07-15 09:32:09.732327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.649  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:15.649 00:06:15.649 09:32:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:15.649 09:32:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:15.649 09:32:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:15.649 09:32:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:15.649 09:32:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:15.649 09:32:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:15.649 09:32:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.215 09:32:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:16.215 09:32:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:16.215 09:32:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:16.215 09:32:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.473 [2024-07-15 09:32:10.685935] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:16.473 [2024-07-15 09:32:10.686075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62968 ] 00:06:16.473 { 00:06:16.473 "subsystems": [ 00:06:16.473 { 00:06:16.473 "subsystem": "bdev", 00:06:16.473 "config": [ 00:06:16.473 { 00:06:16.473 "params": { 00:06:16.473 "trtype": "pcie", 00:06:16.473 "traddr": "0000:00:10.0", 00:06:16.473 "name": "Nvme0" 00:06:16.473 }, 00:06:16.473 "method": "bdev_nvme_attach_controller" 00:06:16.473 }, 00:06:16.473 { 00:06:16.473 "method": "bdev_wait_for_examine" 00:06:16.473 } 00:06:16.473 ] 00:06:16.473 } 00:06:16.473 ] 00:06:16.473 } 00:06:16.473 [2024-07-15 09:32:10.823718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.731 [2024-07-15 09:32:10.946775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.731 [2024-07-15 09:32:11.004431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.049  Copying: 48/48 [kB] (average 46 MBps) 00:06:17.049 00:06:17.049 09:32:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:17.049 09:32:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:17.049 09:32:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.049 09:32:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.049 { 00:06:17.049 "subsystems": [ 00:06:17.049 { 00:06:17.049 "subsystem": "bdev", 00:06:17.049 "config": [ 00:06:17.049 { 00:06:17.049 "params": { 00:06:17.049 "trtype": "pcie", 00:06:17.049 "traddr": "0000:00:10.0", 00:06:17.049 "name": "Nvme0" 00:06:17.049 }, 00:06:17.049 "method": "bdev_nvme_attach_controller" 00:06:17.049 }, 00:06:17.049 { 00:06:17.049 "method": "bdev_wait_for_examine" 00:06:17.049 } 00:06:17.049 ] 00:06:17.049 } 00:06:17.049 ] 00:06:17.049 } 00:06:17.049 [2024-07-15 09:32:11.401552] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:17.049 [2024-07-15 09:32:11.401649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62977 ] 00:06:17.310 [2024-07-15 09:32:11.544218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.310 [2024-07-15 09:32:11.663064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.310 [2024-07-15 09:32:11.721694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.827  Copying: 48/48 [kB] (average 46 MBps) 00:06:17.827 00:06:17.827 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:17.827 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:17.827 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:17.827 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:17.827 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:17.827 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:17.827 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:17.827 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:17.827 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:17.827 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.827 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.827 [2024-07-15 09:32:12.103348] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:17.827 [2024-07-15 09:32:12.103425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62999 ] 00:06:17.827 { 00:06:17.827 "subsystems": [ 00:06:17.827 { 00:06:17.827 "subsystem": "bdev", 00:06:17.827 "config": [ 00:06:17.827 { 00:06:17.827 "params": { 00:06:17.827 "trtype": "pcie", 00:06:17.827 "traddr": "0000:00:10.0", 00:06:17.827 "name": "Nvme0" 00:06:17.827 }, 00:06:17.827 "method": "bdev_nvme_attach_controller" 00:06:17.827 }, 00:06:17.827 { 00:06:17.827 "method": "bdev_wait_for_examine" 00:06:17.827 } 00:06:17.827 ] 00:06:17.827 } 00:06:17.827 ] 00:06:17.827 } 00:06:17.827 [2024-07-15 09:32:12.234832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.086 [2024-07-15 09:32:12.336701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.086 [2024-07-15 09:32:12.392298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.344  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:18.344 00:06:18.344 00:06:18.344 real 0m16.736s 00:06:18.344 user 0m12.483s 00:06:18.344 sys 0m5.769s 00:06:18.344 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.344 ************************************ 00:06:18.344 END TEST dd_rw 00:06:18.344 ************************************ 00:06:18.344 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.344 09:32:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:18.344 09:32:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:18.344 09:32:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.344 09:32:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.344 09:32:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.344 ************************************ 00:06:18.344 START TEST dd_rw_offset 00:06:18.344 ************************************ 00:06:18.344 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:06:18.344 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:18.344 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:18.344 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:18.344 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:18.604 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:18.605 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=eyn5w41uks0vxyp53odyxfoj7r6nt1bpnitxridrz5l6q7ofa5nnox3c98zw6qdws995jtsb69h7xg94a3k524va2httww9u0kreek2j1wx2542sbm1zehbkey5u20owwut3yftbanmnsmzhvi6end9x3pxlmhhl97d4p7wwe4sdxh2f2fcrvn5ioi3zfjxkvykfeqshfcz7kz5vzbrmoohxzz5le2ecqt6txl2c2dmwp9mgfje8bqiw7wcd5mi0a97trrd79yfrpbwok9pppkjjb6j7ma4vrmd8puk8vwl3cfgu4zki2r9dl3a9s7vwu9vpsr936m69hnj811j2yc9e1k0ttbru9ut80s7xikvnhz6tn8j7bjp49qd0gdirt1poeef9h911ebe6mi3xt3uowd8fotgvt196s77reffu4bixo2n1oj49kb76llvwk0oq8ss2uw9mqiplqefdrd1bvr0z0mfnv314vu1ieuv5af7wyqj2xz8cjmr4d38bpoxt6mmj4lc7b003cztt5h2cb0ma9w84fai4f33v8s7h285fz87euwvhqqnuopkmywugxok2v3hg1o6rrdvl2lljl8w3wulg1ldk4s18x6hjir1wwtazd3ixffwr4phgwtb92p6b7ddslrtu8qhtclvkr70utzhrp4cg8cji4qg3svqy9b9ts1wtrwizry1quazt5y9amdz6u9vrg7fsyp7y5dsqe5s1jqqh6zutchxx722cw641ybllqi31aoxnptev7kc2zsmcfdbuz8g2cp2uvmugctoaqy1mc38cx8cgr58z60wnjav964bfn4f4rf1wmmknvez45403k5b0hl5l0rfdy08wou89953nfxnl2j9vysvfcvu2iiolx1wuh1byqncz6e3azhd7hkjborte3p1cfyp7ehk74gitu4r01hihhgcxr5ndkszw0zipm6k1ih58ag5oqw23sp4so4iby72y2memc0o4ofbpn81fhxmcro3x51cvpwrvrenbhpjg50nj80otchzvs3gkh7g68thke9kq47p673oyuqovwkvic2wmh4qhea6wibm0qyyflbi7q04icjv43son37ptex7pkwb25f96mww0ayqodrio78ka75ydwaj16csfa2l0qwdr7p08cyybyoysh76qnkl56xnisophff7upspa9957b4xrp620v1xtmhg56nuct5pgwrs4ghao1bpgehxqdd2yt45airc81bpoi9viw9nzpdgo29yuzjm92h8iyn8h4vtfgs9b3rz5wn4pqfa9m32u69d01p65s4wgslobj9faukq7nlpmjqmgzgj45fk9ltc0safebepr2a8qvhpkgxqon0275xlzmgaorcfeso55k0lf6hd2a4gf1kuc6865prnu89gfv7ijx03q91iybc2bepljraqbm535z13tptybyd991j3l4p4zpq40p9uffoe6gdkqf7ketugqqpzxrz2csgo99tgf81da6u58l7ltnisgdm9cft1zwuqszkkz9j1nhe9dwr7xl6rurxoy45aol89vz310163khbh4xq5cgi3z5t7waz21580utwijrwhiaqktd1muv1l2iehkaj9284jgrs43o2cxtfbk8u4vhbnlo1ujmv4fjvii2vzhcw6ukgn7znoyq4av3tvxs6frovfer6p6le1kbg7tqopdcr6vf5cgnyb2oezd1qknpl1b4cnn1liii11xrlxq4l3rmeadu3z169oho0b0oamei9lfoyspibhouh0ouzvjjggyrose2txnatgnwihxnyjfnifpasrufj5yk4nle8ihssf2qqzcy73wkb8jigbbmijjybfgm6nznkssjkqp3fd9gzu2wyz719vax790wiwjvjmaudnu869jrovghal09vlufwi5t006svb98k8myoklv52r942equiqhdxvbcg144kspoxict0972l1o0l08zqu2wbnv6q0rdf7h3naqiwyyqvdo3l25a74mfpl17hbc5yguzrawhzdybya2ffkti7mstigiyn08aeccol2bcmck3m0fkmsh5dkv0gve37888edjrtrljyxbxig5tbbcwca3eidatn9ee229iwnqbz3e120zrtgiy3v4etyelhsb3h17p87xchy4fsv0i22ab3xlq0y7wfeihfipxzz9nayrxmyj1amro413c3akzsgrndvoq7itvguozs6jer3fv2trm1l058vfujsfb7fvqzpzlhjliho3kg75wkhci92i6cay4qvdfmq8quxlf1koxy2lz1e42wtj3wogszaz2yqzzcmfqjrnknanfndrieu86utqsg6jufln302himqwbpmi5es10quapgtopzqogz7dpymzcy4afp28lp41070ahw2hmv8rummgkj71s33virc2sxso62vhwvzv22odegi87p5zqn7k2dfki224d28lunbib7dqv6bc6dpefm9jq8ae4src4cpyj7gy1jmt2nu9h9pwbg1qqe6elkwxfo5z7h9a02i6weibszlx6wib4inh2jorvt3g297ccuhev4sgogty8txebct21jcssywbbdgtvturx375j8n4hsf8xy57n1oocghaebadoen0t0ek3q4gj6ghl9dvshln2e7mevfgr72cz12p1m8j06wmjql3je8885bdvl14va8e1dth8batf85khbb39xllb2opfeytats1g7geq4jpahvsv0t95xop3lm1l7v8shtf7oc45ryw9vp63omxxxycidulzky5x5xgymg8bnhl8rjtna986skuo6mss5rieu0vq7z0bcucig7rel6j2yp6zhpyus68pfi449urscqwtxk29e37uux5w1swab23yrim23bsicbolbgd4x7wx1qsetntkp8yttcdqeg769l56mac7frojxnoqc9vo05b4ee1fyt99y0x7ph1oq9idylrlreyc66mswfytnvbcjfsgh9cldhun10up2mzxt6154l7nj8i99qyg1u783n65vjyayir3xxe3xpi4zc6taxttsv50bjwuzbtou74xgvbfdr40jdzbstifpmg682y9h1pa6v3z174yt0mrn92153jtiioh7mjrl3qlnxvhogw0oj2hg4dvacdq5p32gp76ea0zos72nsr3jkvpeo0jvhrtlcxz0h8cnqxd5eehpkw195i3ib1o1mw8e2vqc6dfyv5g287ao63um6v5m78zajhx66l57jcshtkbwsf4kvukgp02hkzrcpgjn83tzaojo01nskzy34xwucxbmymgpt6hgznykdypz5zko2hcb5ywero5q26xdtgo9iwgz66k6zx67oaqeecti9dqr2z13zwu7gojcz581pz2wf68s68wc5vpny31wyqs811q8d0a5n56knhy2j9rymtjxdhgzdk2o9pj21hme6hqnlcbkgms8uccfxx2mvt9dwvchgie9be5e34cevjdrbi4h7xflf48sy5z53godxqsowuj81ngg7gqcbic
nn5f3gy9tt101stgv385y2onae69u3oxllye5nbltdofrr6siequ6q0gotvxlqugzr5kwen37256dczuljo4dynv4sh5f1oijz64d99ecsu17p8fnfhocs3su3nazuuoapxg1af6uz1bkzsadsg91zq9ros5rw39a5prghsmo9aki8w1yvfq66j40kk5ey0rrn8vpr2um640ku7vmq7jz5jcinh8ju3l6m0l1aeejvf9ms55dnjwam122bsnowrknrpbo1aatktblyfdaghz3zzyn156xbg9y3tun4in6yqmsj0emq4armdgl63kupd7ntijqy3ffvu51jxp6960181ibv4mwx13rms22fiv9t02uzo3dh4ziqnzbjsvtejpgu5m6cwzu6o7txgowckb8lf6m5nlzaykw2umq7pdvwqu2tkab91bxncs02eot5m6cy1tosx1si0xdzi34izhvxfemb7ky1c5kzuz7ekm7z4pfdehm9uko1vmmfmivjbl8fd8cmqciasnhb9tdebs0csesej97dm0x8 00:06:18.605 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:18.605 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:18.605 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:18.605 09:32:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:18.605 [2024-07-15 09:32:12.874196] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:18.605 [2024-07-15 09:32:12.874304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63035 ] 00:06:18.605 { 00:06:18.605 "subsystems": [ 00:06:18.605 { 00:06:18.605 "subsystem": "bdev", 00:06:18.605 "config": [ 00:06:18.605 { 00:06:18.605 "params": { 00:06:18.605 "trtype": "pcie", 00:06:18.605 "traddr": "0000:00:10.0", 00:06:18.605 "name": "Nvme0" 00:06:18.605 }, 00:06:18.605 "method": "bdev_nvme_attach_controller" 00:06:18.605 }, 00:06:18.605 { 00:06:18.605 "method": "bdev_wait_for_examine" 00:06:18.605 } 00:06:18.605 ] 00:06:18.605 } 00:06:18.605 ] 00:06:18.605 } 00:06:18.605 [2024-07-15 09:32:13.002652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.878 [2024-07-15 09:32:13.095674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.878 [2024-07-15 09:32:13.152956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.135  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:19.135 00:06:19.135 09:32:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:19.135 09:32:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:19.135 09:32:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:19.135 09:32:13 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:19.135 [2024-07-15 09:32:13.535372] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:19.135 [2024-07-15 09:32:13.535478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63043 ] 00:06:19.135 { 00:06:19.135 "subsystems": [ 00:06:19.135 { 00:06:19.135 "subsystem": "bdev", 00:06:19.135 "config": [ 00:06:19.135 { 00:06:19.135 "params": { 00:06:19.135 "trtype": "pcie", 00:06:19.135 "traddr": "0000:00:10.0", 00:06:19.135 "name": "Nvme0" 00:06:19.135 }, 00:06:19.135 "method": "bdev_nvme_attach_controller" 00:06:19.135 }, 00:06:19.135 { 00:06:19.135 "method": "bdev_wait_for_examine" 00:06:19.135 } 00:06:19.135 ] 00:06:19.135 } 00:06:19.135 ] 00:06:19.135 } 00:06:19.396 [2024-07-15 09:32:13.668491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.396 [2024-07-15 09:32:13.789236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.396 [2024-07-15 09:32:13.847958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.914  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:19.914 00:06:19.914 09:32:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ eyn5w41uks0vxyp53odyxfoj7r6nt1bpnitxridrz5l6q7ofa5nnox3c98zw6qdws995jtsb69h7xg94a3k524va2httww9u0kreek2j1wx2542sbm1zehbkey5u20owwut3yftbanmnsmzhvi6end9x3pxlmhhl97d4p7wwe4sdxh2f2fcrvn5ioi3zfjxkvykfeqshfcz7kz5vzbrmoohxzz5le2ecqt6txl2c2dmwp9mgfje8bqiw7wcd5mi0a97trrd79yfrpbwok9pppkjjb6j7ma4vrmd8puk8vwl3cfgu4zki2r9dl3a9s7vwu9vpsr936m69hnj811j2yc9e1k0ttbru9ut80s7xikvnhz6tn8j7bjp49qd0gdirt1poeef9h911ebe6mi3xt3uowd8fotgvt196s77reffu4bixo2n1oj49kb76llvwk0oq8ss2uw9mqiplqefdrd1bvr0z0mfnv314vu1ieuv5af7wyqj2xz8cjmr4d38bpoxt6mmj4lc7b003cztt5h2cb0ma9w84fai4f33v8s7h285fz87euwvhqqnuopkmywugxok2v3hg1o6rrdvl2lljl8w3wulg1ldk4s18x6hjir1wwtazd3ixffwr4phgwtb92p6b7ddslrtu8qhtclvkr70utzhrp4cg8cji4qg3svqy9b9ts1wtrwizry1quazt5y9amdz6u9vrg7fsyp7y5dsqe5s1jqqh6zutchxx722cw641ybllqi31aoxnptev7kc2zsmcfdbuz8g2cp2uvmugctoaqy1mc38cx8cgr58z60wnjav964bfn4f4rf1wmmknvez45403k5b0hl5l0rfdy08wou89953nfxnl2j9vysvfcvu2iiolx1wuh1byqncz6e3azhd7hkjborte3p1cfyp7ehk74gitu4r01hihhgcxr5ndkszw0zipm6k1ih58ag5oqw23sp4so4iby72y2memc0o4ofbpn81fhxmcro3x51cvpwrvrenbhpjg50nj80otchzvs3gkh7g68thke9kq47p673oyuqovwkvic2wmh4qhea6wibm0qyyflbi7q04icjv43son37ptex7pkwb25f96mww0ayqodrio78ka75ydwaj16csfa2l0qwdr7p08cyybyoysh76qnkl56xnisophff7upspa9957b4xrp620v1xtmhg56nuct5pgwrs4ghao1bpgehxqdd2yt45airc81bpoi9viw9nzpdgo29yuzjm92h8iyn8h4vtfgs9b3rz5wn4pqfa9m32u69d01p65s4wgslobj9faukq7nlpmjqmgzgj45fk9ltc0safebepr2a8qvhpkgxqon0275xlzmgaorcfeso55k0lf6hd2a4gf1kuc6865prnu89gfv7ijx03q91iybc2bepljraqbm535z13tptybyd991j3l4p4zpq40p9uffoe6gdkqf7ketugqqpzxrz2csgo99tgf81da6u58l7ltnisgdm9cft1zwuqszkkz9j1nhe9dwr7xl6rurxoy45aol89vz310163khbh4xq5cgi3z5t7waz21580utwijrwhiaqktd1muv1l2iehkaj9284jgrs43o2cxtfbk8u4vhbnlo1ujmv4fjvii2vzhcw6ukgn7znoyq4av3tvxs6frovfer6p6le1kbg7tqopdcr6vf5cgnyb2oezd1qknpl1b4cnn1liii11xrlxq4l3rmeadu3z169oho0b0oamei9lfoyspibhouh0ouzvjjggyrose2txnatgnwihxnyjfnifpasrufj5yk4nle8ihssf2qqzcy73wkb8jigbbmijjybfgm6nznkssjkqp3fd9gzu2wyz719vax790wiwjvjmaudnu869jrovghal09vlufwi5t006svb98k8myoklv52r942equiqhdxvbcg144kspoxict0972l1o0l08zqu2wbnv6q0rdf7h3naqiwyyqvdo3l25a74mfpl17hbc5yguzrawhzdybya2ffkti7mstigiyn08aeccol2bcmck3m0fkmsh5dkv0gve37888edjrtrljyxbxig5tbbcwca3eidatn9ee229iwnqbz3e120zrtgiy3v4etye
lhsb3h17p87xchy4fsv0i22ab3xlq0y7wfeihfipxzz9nayrxmyj1amro413c3akzsgrndvoq7itvguozs6jer3fv2trm1l058vfujsfb7fvqzpzlhjliho3kg75wkhci92i6cay4qvdfmq8quxlf1koxy2lz1e42wtj3wogszaz2yqzzcmfqjrnknanfndrieu86utqsg6jufln302himqwbpmi5es10quapgtopzqogz7dpymzcy4afp28lp41070ahw2hmv8rummgkj71s33virc2sxso62vhwvzv22odegi87p5zqn7k2dfki224d28lunbib7dqv6bc6dpefm9jq8ae4src4cpyj7gy1jmt2nu9h9pwbg1qqe6elkwxfo5z7h9a02i6weibszlx6wib4inh2jorvt3g297ccuhev4sgogty8txebct21jcssywbbdgtvturx375j8n4hsf8xy57n1oocghaebadoen0t0ek3q4gj6ghl9dvshln2e7mevfgr72cz12p1m8j06wmjql3je8885bdvl14va8e1dth8batf85khbb39xllb2opfeytats1g7geq4jpahvsv0t95xop3lm1l7v8shtf7oc45ryw9vp63omxxxycidulzky5x5xgymg8bnhl8rjtna986skuo6mss5rieu0vq7z0bcucig7rel6j2yp6zhpyus68pfi449urscqwtxk29e37uux5w1swab23yrim23bsicbolbgd4x7wx1qsetntkp8yttcdqeg769l56mac7frojxnoqc9vo05b4ee1fyt99y0x7ph1oq9idylrlreyc66mswfytnvbcjfsgh9cldhun10up2mzxt6154l7nj8i99qyg1u783n65vjyayir3xxe3xpi4zc6taxttsv50bjwuzbtou74xgvbfdr40jdzbstifpmg682y9h1pa6v3z174yt0mrn92153jtiioh7mjrl3qlnxvhogw0oj2hg4dvacdq5p32gp76ea0zos72nsr3jkvpeo0jvhrtlcxz0h8cnqxd5eehpkw195i3ib1o1mw8e2vqc6dfyv5g287ao63um6v5m78zajhx66l57jcshtkbwsf4kvukgp02hkzrcpgjn83tzaojo01nskzy34xwucxbmymgpt6hgznykdypz5zko2hcb5ywero5q26xdtgo9iwgz66k6zx67oaqeecti9dqr2z13zwu7gojcz581pz2wf68s68wc5vpny31wyqs811q8d0a5n56knhy2j9rymtjxdhgzdk2o9pj21hme6hqnlcbkgms8uccfxx2mvt9dwvchgie9be5e34cevjdrbi4h7xflf48sy5z53godxqsowuj81ngg7gqcbicnn5f3gy9tt101stgv385y2onae69u3oxllye5nbltdofrr6siequ6q0gotvxlqugzr5kwen37256dczuljo4dynv4sh5f1oijz64d99ecsu17p8fnfhocs3su3nazuuoapxg1af6uz1bkzsadsg91zq9ros5rw39a5prghsmo9aki8w1yvfq66j40kk5ey0rrn8vpr2um640ku7vmq7jz5jcinh8ju3l6m0l1aeejvf9ms55dnjwam122bsnowrknrpbo1aatktblyfdaghz3zzyn156xbg9y3tun4in6yqmsj0emq4armdgl63kupd7ntijqy3ffvu51jxp6960181ibv4mwx13rms22fiv9t02uzo3dh4ziqnzbjsvtejpgu5m6cwzu6o7txgowckb8lf6m5nlzaykw2umq7pdvwqu2tkab91bxncs02eot5m6cy1tosx1si0xdzi34izhvxfemb7ky1c5kzuz7ekm7z4pfdehm9uko1vmmfmivjbl8fd8cmqciasnhb9tdebs0csesej97dm0x8 == 
\e\y\n\5\w\4\1\u\k\s\0\v\x\y\p\5\3\o\d\y\x\f\o\j\7\r\6\n\t\1\b\p\n\i\t\x\r\i\d\r\z\5\l\6\q\7\o\f\a\5\n\n\o\x\3\c\9\8\z\w\6\q\d\w\s\9\9\5\j\t\s\b\6\9\h\7\x\g\9\4\a\3\k\5\2\4\v\a\2\h\t\t\w\w\9\u\0\k\r\e\e\k\2\j\1\w\x\2\5\4\2\s\b\m\1\z\e\h\b\k\e\y\5\u\2\0\o\w\w\u\t\3\y\f\t\b\a\n\m\n\s\m\z\h\v\i\6\e\n\d\9\x\3\p\x\l\m\h\h\l\9\7\d\4\p\7\w\w\e\4\s\d\x\h\2\f\2\f\c\r\v\n\5\i\o\i\3\z\f\j\x\k\v\y\k\f\e\q\s\h\f\c\z\7\k\z\5\v\z\b\r\m\o\o\h\x\z\z\5\l\e\2\e\c\q\t\6\t\x\l\2\c\2\d\m\w\p\9\m\g\f\j\e\8\b\q\i\w\7\w\c\d\5\m\i\0\a\9\7\t\r\r\d\7\9\y\f\r\p\b\w\o\k\9\p\p\p\k\j\j\b\6\j\7\m\a\4\v\r\m\d\8\p\u\k\8\v\w\l\3\c\f\g\u\4\z\k\i\2\r\9\d\l\3\a\9\s\7\v\w\u\9\v\p\s\r\9\3\6\m\6\9\h\n\j\8\1\1\j\2\y\c\9\e\1\k\0\t\t\b\r\u\9\u\t\8\0\s\7\x\i\k\v\n\h\z\6\t\n\8\j\7\b\j\p\4\9\q\d\0\g\d\i\r\t\1\p\o\e\e\f\9\h\9\1\1\e\b\e\6\m\i\3\x\t\3\u\o\w\d\8\f\o\t\g\v\t\1\9\6\s\7\7\r\e\f\f\u\4\b\i\x\o\2\n\1\o\j\4\9\k\b\7\6\l\l\v\w\k\0\o\q\8\s\s\2\u\w\9\m\q\i\p\l\q\e\f\d\r\d\1\b\v\r\0\z\0\m\f\n\v\3\1\4\v\u\1\i\e\u\v\5\a\f\7\w\y\q\j\2\x\z\8\c\j\m\r\4\d\3\8\b\p\o\x\t\6\m\m\j\4\l\c\7\b\0\0\3\c\z\t\t\5\h\2\c\b\0\m\a\9\w\8\4\f\a\i\4\f\3\3\v\8\s\7\h\2\8\5\f\z\8\7\e\u\w\v\h\q\q\n\u\o\p\k\m\y\w\u\g\x\o\k\2\v\3\h\g\1\o\6\r\r\d\v\l\2\l\l\j\l\8\w\3\w\u\l\g\1\l\d\k\4\s\1\8\x\6\h\j\i\r\1\w\w\t\a\z\d\3\i\x\f\f\w\r\4\p\h\g\w\t\b\9\2\p\6\b\7\d\d\s\l\r\t\u\8\q\h\t\c\l\v\k\r\7\0\u\t\z\h\r\p\4\c\g\8\c\j\i\4\q\g\3\s\v\q\y\9\b\9\t\s\1\w\t\r\w\i\z\r\y\1\q\u\a\z\t\5\y\9\a\m\d\z\6\u\9\v\r\g\7\f\s\y\p\7\y\5\d\s\q\e\5\s\1\j\q\q\h\6\z\u\t\c\h\x\x\7\2\2\c\w\6\4\1\y\b\l\l\q\i\3\1\a\o\x\n\p\t\e\v\7\k\c\2\z\s\m\c\f\d\b\u\z\8\g\2\c\p\2\u\v\m\u\g\c\t\o\a\q\y\1\m\c\3\8\c\x\8\c\g\r\5\8\z\6\0\w\n\j\a\v\9\6\4\b\f\n\4\f\4\r\f\1\w\m\m\k\n\v\e\z\4\5\4\0\3\k\5\b\0\h\l\5\l\0\r\f\d\y\0\8\w\o\u\8\9\9\5\3\n\f\x\n\l\2\j\9\v\y\s\v\f\c\v\u\2\i\i\o\l\x\1\w\u\h\1\b\y\q\n\c\z\6\e\3\a\z\h\d\7\h\k\j\b\o\r\t\e\3\p\1\c\f\y\p\7\e\h\k\7\4\g\i\t\u\4\r\0\1\h\i\h\h\g\c\x\r\5\n\d\k\s\z\w\0\z\i\p\m\6\k\1\i\h\5\8\a\g\5\o\q\w\2\3\s\p\4\s\o\4\i\b\y\7\2\y\2\m\e\m\c\0\o\4\o\f\b\p\n\8\1\f\h\x\m\c\r\o\3\x\5\1\c\v\p\w\r\v\r\e\n\b\h\p\j\g\5\0\n\j\8\0\o\t\c\h\z\v\s\3\g\k\h\7\g\6\8\t\h\k\e\9\k\q\4\7\p\6\7\3\o\y\u\q\o\v\w\k\v\i\c\2\w\m\h\4\q\h\e\a\6\w\i\b\m\0\q\y\y\f\l\b\i\7\q\0\4\i\c\j\v\4\3\s\o\n\3\7\p\t\e\x\7\p\k\w\b\2\5\f\9\6\m\w\w\0\a\y\q\o\d\r\i\o\7\8\k\a\7\5\y\d\w\a\j\1\6\c\s\f\a\2\l\0\q\w\d\r\7\p\0\8\c\y\y\b\y\o\y\s\h\7\6\q\n\k\l\5\6\x\n\i\s\o\p\h\f\f\7\u\p\s\p\a\9\9\5\7\b\4\x\r\p\6\2\0\v\1\x\t\m\h\g\5\6\n\u\c\t\5\p\g\w\r\s\4\g\h\a\o\1\b\p\g\e\h\x\q\d\d\2\y\t\4\5\a\i\r\c\8\1\b\p\o\i\9\v\i\w\9\n\z\p\d\g\o\2\9\y\u\z\j\m\9\2\h\8\i\y\n\8\h\4\v\t\f\g\s\9\b\3\r\z\5\w\n\4\p\q\f\a\9\m\3\2\u\6\9\d\0\1\p\6\5\s\4\w\g\s\l\o\b\j\9\f\a\u\k\q\7\n\l\p\m\j\q\m\g\z\g\j\4\5\f\k\9\l\t\c\0\s\a\f\e\b\e\p\r\2\a\8\q\v\h\p\k\g\x\q\o\n\0\2\7\5\x\l\z\m\g\a\o\r\c\f\e\s\o\5\5\k\0\l\f\6\h\d\2\a\4\g\f\1\k\u\c\6\8\6\5\p\r\n\u\8\9\g\f\v\7\i\j\x\0\3\q\9\1\i\y\b\c\2\b\e\p\l\j\r\a\q\b\m\5\3\5\z\1\3\t\p\t\y\b\y\d\9\9\1\j\3\l\4\p\4\z\p\q\4\0\p\9\u\f\f\o\e\6\g\d\k\q\f\7\k\e\t\u\g\q\q\p\z\x\r\z\2\c\s\g\o\9\9\t\g\f\8\1\d\a\6\u\5\8\l\7\l\t\n\i\s\g\d\m\9\c\f\t\1\z\w\u\q\s\z\k\k\z\9\j\1\n\h\e\9\d\w\r\7\x\l\6\r\u\r\x\o\y\4\5\a\o\l\8\9\v\z\3\1\0\1\6\3\k\h\b\h\4\x\q\5\c\g\i\3\z\5\t\7\w\a\z\2\1\5\8\0\u\t\w\i\j\r\w\h\i\a\q\k\t\d\1\m\u\v\1\l\2\i\e\h\k\a\j\9\2\8\4\j\g\r\s\4\3\o\2\c\x\t\f\b\k\8\u\4\v\h\b\n\l\o\1\u\j\m\v\4\f\j\v\i\i\2\v\z\h\c\w\6\u\k\g\n\7\z\n\o\y\q\4\a\v\3\t\v\x\s\6\f\r\o\v\f\e\r\6\p\6\l\e\1\k\b\g\7\t\q\o\p\d\c\r\6\v\f\5\c\g\n\y\b\2\o\e\z\d\1\q\k\n\p\l\1\b\4\c\n\n\1\l\i\i\i\1\1\x\r\l\x\q\4\l\3\r\m\e\a\d\u\3\z\1\6\9\o\h\o\0\b\0\o\a\m\e\i\
9\l\f\o\y\s\p\i\b\h\o\u\h\0\o\u\z\v\j\j\g\g\y\r\o\s\e\2\t\x\n\a\t\g\n\w\i\h\x\n\y\j\f\n\i\f\p\a\s\r\u\f\j\5\y\k\4\n\l\e\8\i\h\s\s\f\2\q\q\z\c\y\7\3\w\k\b\8\j\i\g\b\b\m\i\j\j\y\b\f\g\m\6\n\z\n\k\s\s\j\k\q\p\3\f\d\9\g\z\u\2\w\y\z\7\1\9\v\a\x\7\9\0\w\i\w\j\v\j\m\a\u\d\n\u\8\6\9\j\r\o\v\g\h\a\l\0\9\v\l\u\f\w\i\5\t\0\0\6\s\v\b\9\8\k\8\m\y\o\k\l\v\5\2\r\9\4\2\e\q\u\i\q\h\d\x\v\b\c\g\1\4\4\k\s\p\o\x\i\c\t\0\9\7\2\l\1\o\0\l\0\8\z\q\u\2\w\b\n\v\6\q\0\r\d\f\7\h\3\n\a\q\i\w\y\y\q\v\d\o\3\l\2\5\a\7\4\m\f\p\l\1\7\h\b\c\5\y\g\u\z\r\a\w\h\z\d\y\b\y\a\2\f\f\k\t\i\7\m\s\t\i\g\i\y\n\0\8\a\e\c\c\o\l\2\b\c\m\c\k\3\m\0\f\k\m\s\h\5\d\k\v\0\g\v\e\3\7\8\8\8\e\d\j\r\t\r\l\j\y\x\b\x\i\g\5\t\b\b\c\w\c\a\3\e\i\d\a\t\n\9\e\e\2\2\9\i\w\n\q\b\z\3\e\1\2\0\z\r\t\g\i\y\3\v\4\e\t\y\e\l\h\s\b\3\h\1\7\p\8\7\x\c\h\y\4\f\s\v\0\i\2\2\a\b\3\x\l\q\0\y\7\w\f\e\i\h\f\i\p\x\z\z\9\n\a\y\r\x\m\y\j\1\a\m\r\o\4\1\3\c\3\a\k\z\s\g\r\n\d\v\o\q\7\i\t\v\g\u\o\z\s\6\j\e\r\3\f\v\2\t\r\m\1\l\0\5\8\v\f\u\j\s\f\b\7\f\v\q\z\p\z\l\h\j\l\i\h\o\3\k\g\7\5\w\k\h\c\i\9\2\i\6\c\a\y\4\q\v\d\f\m\q\8\q\u\x\l\f\1\k\o\x\y\2\l\z\1\e\4\2\w\t\j\3\w\o\g\s\z\a\z\2\y\q\z\z\c\m\f\q\j\r\n\k\n\a\n\f\n\d\r\i\e\u\8\6\u\t\q\s\g\6\j\u\f\l\n\3\0\2\h\i\m\q\w\b\p\m\i\5\e\s\1\0\q\u\a\p\g\t\o\p\z\q\o\g\z\7\d\p\y\m\z\c\y\4\a\f\p\2\8\l\p\4\1\0\7\0\a\h\w\2\h\m\v\8\r\u\m\m\g\k\j\7\1\s\3\3\v\i\r\c\2\s\x\s\o\6\2\v\h\w\v\z\v\2\2\o\d\e\g\i\8\7\p\5\z\q\n\7\k\2\d\f\k\i\2\2\4\d\2\8\l\u\n\b\i\b\7\d\q\v\6\b\c\6\d\p\e\f\m\9\j\q\8\a\e\4\s\r\c\4\c\p\y\j\7\g\y\1\j\m\t\2\n\u\9\h\9\p\w\b\g\1\q\q\e\6\e\l\k\w\x\f\o\5\z\7\h\9\a\0\2\i\6\w\e\i\b\s\z\l\x\6\w\i\b\4\i\n\h\2\j\o\r\v\t\3\g\2\9\7\c\c\u\h\e\v\4\s\g\o\g\t\y\8\t\x\e\b\c\t\2\1\j\c\s\s\y\w\b\b\d\g\t\v\t\u\r\x\3\7\5\j\8\n\4\h\s\f\8\x\y\5\7\n\1\o\o\c\g\h\a\e\b\a\d\o\e\n\0\t\0\e\k\3\q\4\g\j\6\g\h\l\9\d\v\s\h\l\n\2\e\7\m\e\v\f\g\r\7\2\c\z\1\2\p\1\m\8\j\0\6\w\m\j\q\l\3\j\e\8\8\8\5\b\d\v\l\1\4\v\a\8\e\1\d\t\h\8\b\a\t\f\8\5\k\h\b\b\3\9\x\l\l\b\2\o\p\f\e\y\t\a\t\s\1\g\7\g\e\q\4\j\p\a\h\v\s\v\0\t\9\5\x\o\p\3\l\m\1\l\7\v\8\s\h\t\f\7\o\c\4\5\r\y\w\9\v\p\6\3\o\m\x\x\x\y\c\i\d\u\l\z\k\y\5\x\5\x\g\y\m\g\8\b\n\h\l\8\r\j\t\n\a\9\8\6\s\k\u\o\6\m\s\s\5\r\i\e\u\0\v\q\7\z\0\b\c\u\c\i\g\7\r\e\l\6\j\2\y\p\6\z\h\p\y\u\s\6\8\p\f\i\4\4\9\u\r\s\c\q\w\t\x\k\2\9\e\3\7\u\u\x\5\w\1\s\w\a\b\2\3\y\r\i\m\2\3\b\s\i\c\b\o\l\b\g\d\4\x\7\w\x\1\q\s\e\t\n\t\k\p\8\y\t\t\c\d\q\e\g\7\6\9\l\5\6\m\a\c\7\f\r\o\j\x\n\o\q\c\9\v\o\0\5\b\4\e\e\1\f\y\t\9\9\y\0\x\7\p\h\1\o\q\9\i\d\y\l\r\l\r\e\y\c\6\6\m\s\w\f\y\t\n\v\b\c\j\f\s\g\h\9\c\l\d\h\u\n\1\0\u\p\2\m\z\x\t\6\1\5\4\l\7\n\j\8\i\9\9\q\y\g\1\u\7\8\3\n\6\5\v\j\y\a\y\i\r\3\x\x\e\3\x\p\i\4\z\c\6\t\a\x\t\t\s\v\5\0\b\j\w\u\z\b\t\o\u\7\4\x\g\v\b\f\d\r\4\0\j\d\z\b\s\t\i\f\p\m\g\6\8\2\y\9\h\1\p\a\6\v\3\z\1\7\4\y\t\0\m\r\n\9\2\1\5\3\j\t\i\i\o\h\7\m\j\r\l\3\q\l\n\x\v\h\o\g\w\0\o\j\2\h\g\4\d\v\a\c\d\q\5\p\3\2\g\p\7\6\e\a\0\z\o\s\7\2\n\s\r\3\j\k\v\p\e\o\0\j\v\h\r\t\l\c\x\z\0\h\8\c\n\q\x\d\5\e\e\h\p\k\w\1\9\5\i\3\i\b\1\o\1\m\w\8\e\2\v\q\c\6\d\f\y\v\5\g\2\8\7\a\o\6\3\u\m\6\v\5\m\7\8\z\a\j\h\x\6\6\l\5\7\j\c\s\h\t\k\b\w\s\f\4\k\v\u\k\g\p\0\2\h\k\z\r\c\p\g\j\n\8\3\t\z\a\o\j\o\0\1\n\s\k\z\y\3\4\x\w\u\c\x\b\m\y\m\g\p\t\6\h\g\z\n\y\k\d\y\p\z\5\z\k\o\2\h\c\b\5\y\w\e\r\o\5\q\2\6\x\d\t\g\o\9\i\w\g\z\6\6\k\6\z\x\6\7\o\a\q\e\e\c\t\i\9\d\q\r\2\z\1\3\z\w\u\7\g\o\j\c\z\5\8\1\p\z\2\w\f\6\8\s\6\8\w\c\5\v\p\n\y\3\1\w\y\q\s\8\1\1\q\8\d\0\a\5\n\5\6\k\n\h\y\2\j\9\r\y\m\t\j\x\d\h\g\z\d\k\2\o\9\p\j\2\1\h\m\e\6\h\q\n\l\c\b\k\g\m\s\8\u\c\c\f\x\x\2\m\v\t\9\d\w\v\c\h\g\i\e\9\b\e\5\e\3\4\c\e\v\j\d\r\b\i\4\h\7\x\f\l\f\4\8\s\y\5\z\5\3\g\o\d\x\q\s\o\w\u\j\8\1\n\g\g\7\g\q\c\b\i\c\n\n\5\f\3
\g\y\9\t\t\1\0\1\s\t\g\v\3\8\5\y\2\o\n\a\e\6\9\u\3\o\x\l\l\y\e\5\n\b\l\t\d\o\f\r\r\6\s\i\e\q\u\6\q\0\g\o\t\v\x\l\q\u\g\z\r\5\k\w\e\n\3\7\2\5\6\d\c\z\u\l\j\o\4\d\y\n\v\4\s\h\5\f\1\o\i\j\z\6\4\d\9\9\e\c\s\u\1\7\p\8\f\n\f\h\o\c\s\3\s\u\3\n\a\z\u\u\o\a\p\x\g\1\a\f\6\u\z\1\b\k\z\s\a\d\s\g\9\1\z\q\9\r\o\s\5\r\w\3\9\a\5\p\r\g\h\s\m\o\9\a\k\i\8\w\1\y\v\f\q\6\6\j\4\0\k\k\5\e\y\0\r\r\n\8\v\p\r\2\u\m\6\4\0\k\u\7\v\m\q\7\j\z\5\j\c\i\n\h\8\j\u\3\l\6\m\0\l\1\a\e\e\j\v\f\9\m\s\5\5\d\n\j\w\a\m\1\2\2\b\s\n\o\w\r\k\n\r\p\b\o\1\a\a\t\k\t\b\l\y\f\d\a\g\h\z\3\z\z\y\n\1\5\6\x\b\g\9\y\3\t\u\n\4\i\n\6\y\q\m\s\j\0\e\m\q\4\a\r\m\d\g\l\6\3\k\u\p\d\7\n\t\i\j\q\y\3\f\f\v\u\5\1\j\x\p\6\9\6\0\1\8\1\i\b\v\4\m\w\x\1\3\r\m\s\2\2\f\i\v\9\t\0\2\u\z\o\3\d\h\4\z\i\q\n\z\b\j\s\v\t\e\j\p\g\u\5\m\6\c\w\z\u\6\o\7\t\x\g\o\w\c\k\b\8\l\f\6\m\5\n\l\z\a\y\k\w\2\u\m\q\7\p\d\v\w\q\u\2\t\k\a\b\9\1\b\x\n\c\s\0\2\e\o\t\5\m\6\c\y\1\t\o\s\x\1\s\i\0\x\d\z\i\3\4\i\z\h\v\x\f\e\m\b\7\k\y\1\c\5\k\z\u\z\7\e\k\m\7\z\4\p\f\d\e\h\m\9\u\k\o\1\v\m\m\f\m\i\v\j\b\l\8\f\d\8\c\m\q\c\i\a\s\n\h\b\9\t\d\e\b\s\0\c\s\e\s\e\j\9\7\d\m\0\x\8 ]] 00:06:19.915 00:06:19.915 real 0m1.405s 00:06:19.915 user 0m0.976s 00:06:19.915 sys 0m0.599s 00:06:19.915 ************************************ 00:06:19.915 END TEST dd_rw_offset 00:06:19.915 ************************************ 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.915 09:32:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.915 { 00:06:19.915 "subsystems": [ 00:06:19.915 { 00:06:19.915 "subsystem": "bdev", 00:06:19.915 "config": [ 00:06:19.915 { 00:06:19.915 "params": { 00:06:19.915 "trtype": "pcie", 00:06:19.915 "traddr": "0000:00:10.0", 00:06:19.915 "name": "Nvme0" 00:06:19.915 }, 00:06:19.915 "method": "bdev_nvme_attach_controller" 00:06:19.915 }, 00:06:19.915 { 00:06:19.915 "method": "bdev_wait_for_examine" 00:06:19.915 } 00:06:19.915 ] 00:06:19.915 } 00:06:19.915 ] 00:06:19.915 } 00:06:19.915 [2024-07-15 09:32:14.283986] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:19.915 [2024-07-15 09:32:14.284090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63078 ] 00:06:20.173 [2024-07-15 09:32:14.423722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.173 [2024-07-15 09:32:14.538370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.173 [2024-07-15 09:32:14.595175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.693  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:20.693 00:06:20.693 09:32:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.693 00:06:20.693 real 0m20.074s 00:06:20.693 user 0m14.633s 00:06:20.693 sys 0m7.047s 00:06:20.693 09:32:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.693 ************************************ 00:06:20.693 END TEST spdk_dd_basic_rw 00:06:20.693 ************************************ 00:06:20.693 09:32:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.693 09:32:14 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:20.693 09:32:14 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:20.693 09:32:14 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.693 09:32:14 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.693 09:32:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:20.693 ************************************ 00:06:20.693 START TEST spdk_dd_posix 00:06:20.693 ************************************ 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:20.693 * Looking for test storage... 
00:06:20.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:20.693 * First test run, liburing in use 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:20.693 ************************************ 00:06:20.693 START TEST dd_flag_append 00:06:20.693 ************************************ 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=yb3dezq9r8smznvn0yrlbb169i5own4b 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:20.693 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:20.694 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:20.694 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=4pksntjwiu30omei305brwszuvan8s3h 00:06:20.694 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s yb3dezq9r8smznvn0yrlbb169i5own4b 00:06:20.694 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 4pksntjwiu30omei305brwszuvan8s3h 00:06:20.694 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:20.951 [2024-07-15 09:32:15.169741] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:20.951 [2024-07-15 09:32:15.169835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63142 ] 00:06:20.951 [2024-07-15 09:32:15.309801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.213 [2024-07-15 09:32:15.428066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.213 [2024-07-15 09:32:15.486348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.474  Copying: 32/32 [B] (average 31 kBps) 00:06:21.474 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 4pksntjwiu30omei305brwszuvan8s3hyb3dezq9r8smznvn0yrlbb169i5own4b == \4\p\k\s\n\t\j\w\i\u\3\0\o\m\e\i\3\0\5\b\r\w\s\z\u\v\a\n\8\s\3\h\y\b\3\d\e\z\q\9\r\8\s\m\z\n\v\n\0\y\r\l\b\b\1\6\9\i\5\o\w\n\4\b ]] 00:06:21.474 00:06:21.474 real 0m0.636s 00:06:21.474 user 0m0.369s 00:06:21.474 sys 0m0.280s 00:06:21.474 ************************************ 00:06:21.474 END TEST dd_flag_append 00:06:21.474 ************************************ 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:21.474 ************************************ 00:06:21.474 START TEST dd_flag_directory 00:06:21.474 ************************************ 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:21.474 09:32:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.474 [2024-07-15 09:32:15.835846] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:21.474 [2024-07-15 09:32:15.835931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63171 ] 00:06:21.732 [2024-07-15 09:32:15.973142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.732 [2024-07-15 09:32:16.110791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.732 [2024-07-15 09:32:16.170274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.990 [2024-07-15 09:32:16.208376] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:21.990 [2024-07-15 09:32:16.208464] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:21.990 [2024-07-15 09:32:16.208485] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.990 [2024-07-15 09:32:16.331766] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:21.990 09:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:22.248 [2024-07-15 09:32:16.491448] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:22.248 [2024-07-15 09:32:16.491545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63180 ] 00:06:22.248 [2024-07-15 09:32:16.626467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.506 [2024-07-15 09:32:16.743990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.506 [2024-07-15 09:32:16.799405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.507 [2024-07-15 09:32:16.836729] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:22.507 [2024-07-15 09:32:16.836815] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:22.507 [2024-07-15 09:32:16.836850] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:22.507 [2024-07-15 09:32:16.955069] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.766 00:06:22.766 real 0m1.275s 00:06:22.766 user 0m0.757s 00:06:22.766 sys 0m0.306s 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:22.766 ************************************ 00:06:22.766 END TEST dd_flag_directory 00:06:22.766 
************************************ 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:22.766 ************************************ 00:06:22.766 START TEST dd_flag_nofollow 00:06:22.766 ************************************ 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:22.766 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.766 
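Editor's sketch: the dd_flag_directory case that completed just before this nofollow run checks that spdk_dd refuses --iflag=directory (and --oflag=directory) on a regular file, failing with the "Not a directory" errors logged above; the NOT wrapper then turns that expected failure back into a pass. A hedged reproduction of the same expectation, using the binary path as traced in this run (the temp file names are illustrative):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd    # path as traced above
  printf 'plain file\n' > /tmp/dd_regular_file
  if ! "$SPDK_DD" --if=/tmp/dd_regular_file --iflag=directory --of=/tmp/dd_out; then
      echo "directory flag rejected on a regular file, matching the 'Not a directory' errors above"
  fi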
[2024-07-15 09:32:17.183412] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:22.767 [2024-07-15 09:32:17.183516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63214 ] 00:06:23.025 [2024-07-15 09:32:17.319287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.025 [2024-07-15 09:32:17.439054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.283 [2024-07-15 09:32:17.494274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.283 [2024-07-15 09:32:17.528490] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:23.283 [2024-07-15 09:32:17.528565] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:23.283 [2024-07-15 09:32:17.528584] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:23.283 [2024-07-15 09:32:17.643817] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:23.283 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:23.283 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:23.283 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:23.541 09:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:23.541 [2024-07-15 09:32:17.817261] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:23.541 [2024-07-15 09:32:17.818133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63224 ] 00:06:23.541 [2024-07-15 09:32:17.963422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.802 [2024-07-15 09:32:18.083731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.802 [2024-07-15 09:32:18.141457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.802 [2024-07-15 09:32:18.178770] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:23.802 [2024-07-15 09:32:18.178838] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:23.802 [2024-07-15 09:32:18.178874] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.062 [2024-07-15 09:32:18.301528] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:24.062 09:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:24.062 09:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.062 09:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:24.062 09:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:24.062 09:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:24.062 09:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.062 09:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:24.062 09:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:24.062 09:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:24.062 09:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:24.062 [2024-07-15 09:32:18.475485] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:24.062 [2024-07-15 09:32:18.475608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63231 ] 00:06:24.320 [2024-07-15 09:32:18.613502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.320 [2024-07-15 09:32:18.740351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.577 [2024-07-15 09:32:18.799102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.835  Copying: 512/512 [B] (average 500 kBps) 00:06:24.835 00:06:24.835 ************************************ 00:06:24.835 END TEST dd_flag_nofollow 00:06:24.835 ************************************ 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 0ziwssoebhq9mkej902mxxtsmnls3jepk14nlmuwpkjexee1z4jf7qjfbeogjak5dkyj6lgqa4be0jvu5css8sr5chvuqpfj6aakcws4365mugfhm7hpk1xks8whmbuchhrjsk4503aa4h2h67toydms3447mxdrplgt05j0odbiat7dvzqzl3o5kyf5icbsao48y0bpzdougves3t62wiqo9whpbd1thfrvf3595274eemcagor3jvpmq4jr13d8q8i56wdn8zvhhzyzb2yq68g9e8m0tju7s4nxhevnm1zif5re4ka0sbk13smx85wtdp1wp41ra8agfczq9vyg8tnvtr5jt9ejiwlnec9h6owcm1orfr5hbwz5ctyfiqf07gz6fvn6z33tryzm7ie4togj1szudbwnn23wxtrk48mdaq52uj7wfaw64b4u82ityhdzygt5qiioi0lswcnxyiohgtv2bnvd0lodyr85k8if7gxz364nr6rm16rifyr == \0\z\i\w\s\s\o\e\b\h\q\9\m\k\e\j\9\0\2\m\x\x\t\s\m\n\l\s\3\j\e\p\k\1\4\n\l\m\u\w\p\k\j\e\x\e\e\1\z\4\j\f\7\q\j\f\b\e\o\g\j\a\k\5\d\k\y\j\6\l\g\q\a\4\b\e\0\j\v\u\5\c\s\s\8\s\r\5\c\h\v\u\q\p\f\j\6\a\a\k\c\w\s\4\3\6\5\m\u\g\f\h\m\7\h\p\k\1\x\k\s\8\w\h\m\b\u\c\h\h\r\j\s\k\4\5\0\3\a\a\4\h\2\h\6\7\t\o\y\d\m\s\3\4\4\7\m\x\d\r\p\l\g\t\0\5\j\0\o\d\b\i\a\t\7\d\v\z\q\z\l\3\o\5\k\y\f\5\i\c\b\s\a\o\4\8\y\0\b\p\z\d\o\u\g\v\e\s\3\t\6\2\w\i\q\o\9\w\h\p\b\d\1\t\h\f\r\v\f\3\5\9\5\2\7\4\e\e\m\c\a\g\o\r\3\j\v\p\m\q\4\j\r\1\3\d\8\q\8\i\5\6\w\d\n\8\z\v\h\h\z\y\z\b\2\y\q\6\8\g\9\e\8\m\0\t\j\u\7\s\4\n\x\h\e\v\n\m\1\z\i\f\5\r\e\4\k\a\0\s\b\k\1\3\s\m\x\8\5\w\t\d\p\1\w\p\4\1\r\a\8\a\g\f\c\z\q\9\v\y\g\8\t\n\v\t\r\5\j\t\9\e\j\i\w\l\n\e\c\9\h\6\o\w\c\m\1\o\r\f\r\5\h\b\w\z\5\c\t\y\f\i\q\f\0\7\g\z\6\f\v\n\6\z\3\3\t\r\y\z\m\7\i\e\4\t\o\g\j\1\s\z\u\d\b\w\n\n\2\3\w\x\t\r\k\4\8\m\d\a\q\5\2\u\j\7\w\f\a\w\6\4\b\4\u\8\2\i\t\y\h\d\z\y\g\t\5\q\i\i\o\i\0\l\s\w\c\n\x\y\i\o\h\g\t\v\2\b\n\v\d\0\l\o\d\y\r\8\5\k\8\i\f\7\g\x\z\3\6\4\n\r\6\r\m\1\6\r\i\f\y\r ]] 00:06:24.835 00:06:24.835 real 0m1.952s 00:06:24.835 user 0m1.141s 00:06:24.835 sys 0m0.619s 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:24.835 ************************************ 00:06:24.835 START TEST dd_flag_noatime 00:06:24.835 ************************************ 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:06:24.835 09:32:19 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721035938 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721035939 00:06:24.835 09:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:25.794 09:32:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:25.794 [2024-07-15 09:32:20.203948] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:25.794 [2024-07-15 09:32:20.204042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63275 ] 00:06:26.051 [2024-07-15 09:32:20.344874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.051 [2024-07-15 09:32:20.473247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.309 [2024-07-15 09:32:20.531509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.568  Copying: 512/512 [B] (average 500 kBps) 00:06:26.568 00:06:26.568 09:32:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:26.568 09:32:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721035938 )) 00:06:26.568 09:32:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.568 09:32:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721035939 )) 00:06:26.568 09:32:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.568 [2024-07-15 09:32:20.846599] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
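Editor's sketch: the dd_flag_nofollow section that ended just before this noatime test links dd.dump0.link and dd.dump1.link to the dump files and verifies that --iflag=nofollow / --oflag=nofollow make the copy fail with ELOOP ("Too many levels of symbolic links"), while the final copy through the link without the flag succeeds. The same behaviour can be sketched with coreutils dd standing in for spdk_dd (file names here are illustrative):

  head -c 512 /dev/urandom > data.bin
  ln -fs data.bin data.link
  if ! dd if=data.link iflag=nofollow of=copy.bin status=none 2> err.log; then
      grep -qi 'symbolic links' err.log && echo "nofollow refused the symlink (ELOOP), as in the run above"
  fi
  dd if=data.link of=copy.bin status=none    # without nofollow the same copy goes through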
00:06:26.568 [2024-07-15 09:32:20.846692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63293 ] 00:06:26.568 [2024-07-15 09:32:20.984110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.827 [2024-07-15 09:32:21.101247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.827 [2024-07-15 09:32:21.154079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.085  Copying: 512/512 [B] (average 500 kBps) 00:06:27.085 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:27.085 ************************************ 00:06:27.085 END TEST dd_flag_noatime 00:06:27.085 ************************************ 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721035941 )) 00:06:27.085 00:06:27.085 real 0m2.287s 00:06:27.085 user 0m0.747s 00:06:27.085 sys 0m0.579s 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:27.085 ************************************ 00:06:27.085 START TEST dd_flags_misc 00:06:27.085 ************************************ 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:27.085 09:32:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:27.085 [2024-07-15 09:32:21.521501] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
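Editor's sketch: dd_flag_noatime, which finished above, records dump0's access time with stat --printf=%X, copies it with --iflag=noatime, and asserts the atime did not advance; a later copy without the flag is allowed to move it. A minimal version of the stat-based check with coreutils dd as the stand-in (file name illustrative; whether atime advances without the flag also depends on the filesystem's relatime behaviour):

  head -c 512 /dev/urandom > data.bin
  atime_before=$(stat --printf=%X data.bin)
  dd if=data.bin iflag=noatime of=/dev/null status=none
  atime_after=$(stat --printf=%X data.bin)
  (( atime_before == atime_after )) && echo "noatime left the access time untouched"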
00:06:27.085 [2024-07-15 09:32:21.521596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63321 ] 00:06:27.344 [2024-07-15 09:32:21.659921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.344 [2024-07-15 09:32:21.772386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.603 [2024-07-15 09:32:21.828435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:27.861  Copying: 512/512 [B] (average 500 kBps) 00:06:27.861 00:06:27.862 09:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ wa2y5jmitmo18uek4pv0cceyqqus5k46zcnixrpz7hn1edlr6qqp1edsmhqhmac6awxynzdkdlcqhkzl7msvg6jpy8ajh4vksccb4d2299oxnqjuocdn4g0nxfgq70y58bk4a0tx90x9ad0s6xxvtw62s4snk936uuvehpuq6jxf5on0gqvzgz92ox3d2lz22kom43aw6nyjpksicztg7f06z983ojh6zjr8lov1l0uqp0hw0bzt3rvp2hiaxc1a51i1vv16081t8j5tc8knga9m2a2pwcmui95sijlgk9ddkkxr2xitcet70jeg7azah18he9h0mblmmb6r8f860l7os7ebu9qrfveg2284w9jawpdf6lso9h4ajhezcbejop02p47g1tobj1hztcglozbgqawu6qisvriq7qpa1lbvhr1zdeoua08gxtf7gz98aue4jj6m9qz2f971snh7dfa6gwklcjs6jyxywsg492pe5cp1mdqexfcqewy1xced == \w\a\2\y\5\j\m\i\t\m\o\1\8\u\e\k\4\p\v\0\c\c\e\y\q\q\u\s\5\k\4\6\z\c\n\i\x\r\p\z\7\h\n\1\e\d\l\r\6\q\q\p\1\e\d\s\m\h\q\h\m\a\c\6\a\w\x\y\n\z\d\k\d\l\c\q\h\k\z\l\7\m\s\v\g\6\j\p\y\8\a\j\h\4\v\k\s\c\c\b\4\d\2\2\9\9\o\x\n\q\j\u\o\c\d\n\4\g\0\n\x\f\g\q\7\0\y\5\8\b\k\4\a\0\t\x\9\0\x\9\a\d\0\s\6\x\x\v\t\w\6\2\s\4\s\n\k\9\3\6\u\u\v\e\h\p\u\q\6\j\x\f\5\o\n\0\g\q\v\z\g\z\9\2\o\x\3\d\2\l\z\2\2\k\o\m\4\3\a\w\6\n\y\j\p\k\s\i\c\z\t\g\7\f\0\6\z\9\8\3\o\j\h\6\z\j\r\8\l\o\v\1\l\0\u\q\p\0\h\w\0\b\z\t\3\r\v\p\2\h\i\a\x\c\1\a\5\1\i\1\v\v\1\6\0\8\1\t\8\j\5\t\c\8\k\n\g\a\9\m\2\a\2\p\w\c\m\u\i\9\5\s\i\j\l\g\k\9\d\d\k\k\x\r\2\x\i\t\c\e\t\7\0\j\e\g\7\a\z\a\h\1\8\h\e\9\h\0\m\b\l\m\m\b\6\r\8\f\8\6\0\l\7\o\s\7\e\b\u\9\q\r\f\v\e\g\2\2\8\4\w\9\j\a\w\p\d\f\6\l\s\o\9\h\4\a\j\h\e\z\c\b\e\j\o\p\0\2\p\4\7\g\1\t\o\b\j\1\h\z\t\c\g\l\o\z\b\g\q\a\w\u\6\q\i\s\v\r\i\q\7\q\p\a\1\l\b\v\h\r\1\z\d\e\o\u\a\0\8\g\x\t\f\7\g\z\9\8\a\u\e\4\j\j\6\m\9\q\z\2\f\9\7\1\s\n\h\7\d\f\a\6\g\w\k\l\c\j\s\6\j\y\x\y\w\s\g\4\9\2\p\e\5\c\p\1\m\d\q\e\x\f\c\q\e\w\y\1\x\c\e\d ]] 00:06:27.862 09:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:27.862 09:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:27.862 [2024-07-15 09:32:22.132340] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:27.862 [2024-07-15 09:32:22.132446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63331 ] 00:06:27.862 [2024-07-15 09:32:22.267254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.121 [2024-07-15 09:32:22.382678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.121 [2024-07-15 09:32:22.437686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.380  Copying: 512/512 [B] (average 500 kBps) 00:06:28.380 00:06:28.380 09:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ wa2y5jmitmo18uek4pv0cceyqqus5k46zcnixrpz7hn1edlr6qqp1edsmhqhmac6awxynzdkdlcqhkzl7msvg6jpy8ajh4vksccb4d2299oxnqjuocdn4g0nxfgq70y58bk4a0tx90x9ad0s6xxvtw62s4snk936uuvehpuq6jxf5on0gqvzgz92ox3d2lz22kom43aw6nyjpksicztg7f06z983ojh6zjr8lov1l0uqp0hw0bzt3rvp2hiaxc1a51i1vv16081t8j5tc8knga9m2a2pwcmui95sijlgk9ddkkxr2xitcet70jeg7azah18he9h0mblmmb6r8f860l7os7ebu9qrfveg2284w9jawpdf6lso9h4ajhezcbejop02p47g1tobj1hztcglozbgqawu6qisvriq7qpa1lbvhr1zdeoua08gxtf7gz98aue4jj6m9qz2f971snh7dfa6gwklcjs6jyxywsg492pe5cp1mdqexfcqewy1xced == \w\a\2\y\5\j\m\i\t\m\o\1\8\u\e\k\4\p\v\0\c\c\e\y\q\q\u\s\5\k\4\6\z\c\n\i\x\r\p\z\7\h\n\1\e\d\l\r\6\q\q\p\1\e\d\s\m\h\q\h\m\a\c\6\a\w\x\y\n\z\d\k\d\l\c\q\h\k\z\l\7\m\s\v\g\6\j\p\y\8\a\j\h\4\v\k\s\c\c\b\4\d\2\2\9\9\o\x\n\q\j\u\o\c\d\n\4\g\0\n\x\f\g\q\7\0\y\5\8\b\k\4\a\0\t\x\9\0\x\9\a\d\0\s\6\x\x\v\t\w\6\2\s\4\s\n\k\9\3\6\u\u\v\e\h\p\u\q\6\j\x\f\5\o\n\0\g\q\v\z\g\z\9\2\o\x\3\d\2\l\z\2\2\k\o\m\4\3\a\w\6\n\y\j\p\k\s\i\c\z\t\g\7\f\0\6\z\9\8\3\o\j\h\6\z\j\r\8\l\o\v\1\l\0\u\q\p\0\h\w\0\b\z\t\3\r\v\p\2\h\i\a\x\c\1\a\5\1\i\1\v\v\1\6\0\8\1\t\8\j\5\t\c\8\k\n\g\a\9\m\2\a\2\p\w\c\m\u\i\9\5\s\i\j\l\g\k\9\d\d\k\k\x\r\2\x\i\t\c\e\t\7\0\j\e\g\7\a\z\a\h\1\8\h\e\9\h\0\m\b\l\m\m\b\6\r\8\f\8\6\0\l\7\o\s\7\e\b\u\9\q\r\f\v\e\g\2\2\8\4\w\9\j\a\w\p\d\f\6\l\s\o\9\h\4\a\j\h\e\z\c\b\e\j\o\p\0\2\p\4\7\g\1\t\o\b\j\1\h\z\t\c\g\l\o\z\b\g\q\a\w\u\6\q\i\s\v\r\i\q\7\q\p\a\1\l\b\v\h\r\1\z\d\e\o\u\a\0\8\g\x\t\f\7\g\z\9\8\a\u\e\4\j\j\6\m\9\q\z\2\f\9\7\1\s\n\h\7\d\f\a\6\g\w\k\l\c\j\s\6\j\y\x\y\w\s\g\4\9\2\p\e\5\c\p\1\m\d\q\e\x\f\c\q\e\w\y\1\x\c\e\d ]] 00:06:28.380 09:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:28.380 09:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:28.380 [2024-07-15 09:32:22.753414] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:28.380 [2024-07-15 09:32:22.753531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63340 ] 00:06:28.639 [2024-07-15 09:32:22.894188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.639 [2024-07-15 09:32:23.015658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.639 [2024-07-15 09:32:23.072220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.987  Copying: 512/512 [B] (average 125 kBps) 00:06:28.987 00:06:28.987 09:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ wa2y5jmitmo18uek4pv0cceyqqus5k46zcnixrpz7hn1edlr6qqp1edsmhqhmac6awxynzdkdlcqhkzl7msvg6jpy8ajh4vksccb4d2299oxnqjuocdn4g0nxfgq70y58bk4a0tx90x9ad0s6xxvtw62s4snk936uuvehpuq6jxf5on0gqvzgz92ox3d2lz22kom43aw6nyjpksicztg7f06z983ojh6zjr8lov1l0uqp0hw0bzt3rvp2hiaxc1a51i1vv16081t8j5tc8knga9m2a2pwcmui95sijlgk9ddkkxr2xitcet70jeg7azah18he9h0mblmmb6r8f860l7os7ebu9qrfveg2284w9jawpdf6lso9h4ajhezcbejop02p47g1tobj1hztcglozbgqawu6qisvriq7qpa1lbvhr1zdeoua08gxtf7gz98aue4jj6m9qz2f971snh7dfa6gwklcjs6jyxywsg492pe5cp1mdqexfcqewy1xced == \w\a\2\y\5\j\m\i\t\m\o\1\8\u\e\k\4\p\v\0\c\c\e\y\q\q\u\s\5\k\4\6\z\c\n\i\x\r\p\z\7\h\n\1\e\d\l\r\6\q\q\p\1\e\d\s\m\h\q\h\m\a\c\6\a\w\x\y\n\z\d\k\d\l\c\q\h\k\z\l\7\m\s\v\g\6\j\p\y\8\a\j\h\4\v\k\s\c\c\b\4\d\2\2\9\9\o\x\n\q\j\u\o\c\d\n\4\g\0\n\x\f\g\q\7\0\y\5\8\b\k\4\a\0\t\x\9\0\x\9\a\d\0\s\6\x\x\v\t\w\6\2\s\4\s\n\k\9\3\6\u\u\v\e\h\p\u\q\6\j\x\f\5\o\n\0\g\q\v\z\g\z\9\2\o\x\3\d\2\l\z\2\2\k\o\m\4\3\a\w\6\n\y\j\p\k\s\i\c\z\t\g\7\f\0\6\z\9\8\3\o\j\h\6\z\j\r\8\l\o\v\1\l\0\u\q\p\0\h\w\0\b\z\t\3\r\v\p\2\h\i\a\x\c\1\a\5\1\i\1\v\v\1\6\0\8\1\t\8\j\5\t\c\8\k\n\g\a\9\m\2\a\2\p\w\c\m\u\i\9\5\s\i\j\l\g\k\9\d\d\k\k\x\r\2\x\i\t\c\e\t\7\0\j\e\g\7\a\z\a\h\1\8\h\e\9\h\0\m\b\l\m\m\b\6\r\8\f\8\6\0\l\7\o\s\7\e\b\u\9\q\r\f\v\e\g\2\2\8\4\w\9\j\a\w\p\d\f\6\l\s\o\9\h\4\a\j\h\e\z\c\b\e\j\o\p\0\2\p\4\7\g\1\t\o\b\j\1\h\z\t\c\g\l\o\z\b\g\q\a\w\u\6\q\i\s\v\r\i\q\7\q\p\a\1\l\b\v\h\r\1\z\d\e\o\u\a\0\8\g\x\t\f\7\g\z\9\8\a\u\e\4\j\j\6\m\9\q\z\2\f\9\7\1\s\n\h\7\d\f\a\6\g\w\k\l\c\j\s\6\j\y\x\y\w\s\g\4\9\2\p\e\5\c\p\1\m\d\q\e\x\f\c\q\e\w\y\1\x\c\e\d ]] 00:06:28.987 09:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:28.987 09:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:28.987 [2024-07-15 09:32:23.373979] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:28.987 [2024-07-15 09:32:23.374057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63354 ] 00:06:29.260 [2024-07-15 09:32:23.508941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.260 [2024-07-15 09:32:23.629466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.260 [2024-07-15 09:32:23.684604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.519  Copying: 512/512 [B] (average 250 kBps) 00:06:29.519 00:06:29.519 09:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ wa2y5jmitmo18uek4pv0cceyqqus5k46zcnixrpz7hn1edlr6qqp1edsmhqhmac6awxynzdkdlcqhkzl7msvg6jpy8ajh4vksccb4d2299oxnqjuocdn4g0nxfgq70y58bk4a0tx90x9ad0s6xxvtw62s4snk936uuvehpuq6jxf5on0gqvzgz92ox3d2lz22kom43aw6nyjpksicztg7f06z983ojh6zjr8lov1l0uqp0hw0bzt3rvp2hiaxc1a51i1vv16081t8j5tc8knga9m2a2pwcmui95sijlgk9ddkkxr2xitcet70jeg7azah18he9h0mblmmb6r8f860l7os7ebu9qrfveg2284w9jawpdf6lso9h4ajhezcbejop02p47g1tobj1hztcglozbgqawu6qisvriq7qpa1lbvhr1zdeoua08gxtf7gz98aue4jj6m9qz2f971snh7dfa6gwklcjs6jyxywsg492pe5cp1mdqexfcqewy1xced == \w\a\2\y\5\j\m\i\t\m\o\1\8\u\e\k\4\p\v\0\c\c\e\y\q\q\u\s\5\k\4\6\z\c\n\i\x\r\p\z\7\h\n\1\e\d\l\r\6\q\q\p\1\e\d\s\m\h\q\h\m\a\c\6\a\w\x\y\n\z\d\k\d\l\c\q\h\k\z\l\7\m\s\v\g\6\j\p\y\8\a\j\h\4\v\k\s\c\c\b\4\d\2\2\9\9\o\x\n\q\j\u\o\c\d\n\4\g\0\n\x\f\g\q\7\0\y\5\8\b\k\4\a\0\t\x\9\0\x\9\a\d\0\s\6\x\x\v\t\w\6\2\s\4\s\n\k\9\3\6\u\u\v\e\h\p\u\q\6\j\x\f\5\o\n\0\g\q\v\z\g\z\9\2\o\x\3\d\2\l\z\2\2\k\o\m\4\3\a\w\6\n\y\j\p\k\s\i\c\z\t\g\7\f\0\6\z\9\8\3\o\j\h\6\z\j\r\8\l\o\v\1\l\0\u\q\p\0\h\w\0\b\z\t\3\r\v\p\2\h\i\a\x\c\1\a\5\1\i\1\v\v\1\6\0\8\1\t\8\j\5\t\c\8\k\n\g\a\9\m\2\a\2\p\w\c\m\u\i\9\5\s\i\j\l\g\k\9\d\d\k\k\x\r\2\x\i\t\c\e\t\7\0\j\e\g\7\a\z\a\h\1\8\h\e\9\h\0\m\b\l\m\m\b\6\r\8\f\8\6\0\l\7\o\s\7\e\b\u\9\q\r\f\v\e\g\2\2\8\4\w\9\j\a\w\p\d\f\6\l\s\o\9\h\4\a\j\h\e\z\c\b\e\j\o\p\0\2\p\4\7\g\1\t\o\b\j\1\h\z\t\c\g\l\o\z\b\g\q\a\w\u\6\q\i\s\v\r\i\q\7\q\p\a\1\l\b\v\h\r\1\z\d\e\o\u\a\0\8\g\x\t\f\7\g\z\9\8\a\u\e\4\j\j\6\m\9\q\z\2\f\9\7\1\s\n\h\7\d\f\a\6\g\w\k\l\c\j\s\6\j\y\x\y\w\s\g\4\9\2\p\e\5\c\p\1\m\d\q\e\x\f\c\q\e\w\y\1\x\c\e\d ]] 00:06:29.519 09:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:29.519 09:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:29.519 09:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:29.519 09:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:29.519 09:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:29.519 09:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:29.778 [2024-07-15 09:32:24.031077] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
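Editor's sketch: the dd_flags_misc loop being traced here pairs each read flag in flags_ro (direct, nonblock) with each write flag in flags_rw (direct, nonblock, sync, dsync) and checks that dump1 still matches dump0 after every combination. The nesting amounts to the following, with cmp standing in for the bash pattern match used in the log and the dump files being those of this run:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  for flag_ro in direct nonblock; do
      for flag_rw in direct nonblock sync dsync; do
          "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
          cmp -s dd.dump0 dd.dump1 || echo "mismatch for $flag_ro -> $flag_rw"
      done
  done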
00:06:29.778 [2024-07-15 09:32:24.031236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63365 ] 00:06:29.778 [2024-07-15 09:32:24.170909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.037 [2024-07-15 09:32:24.299764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.037 [2024-07-15 09:32:24.354969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.295  Copying: 512/512 [B] (average 500 kBps) 00:06:30.295 00:06:30.295 09:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gwwrjrsqnzx4dr3qhnfpy36evsh8bjh1lz1kfsnn0pwe3fkfwye1x7fgguyhoo5x3faqalz3vtepr9437ea0daud4ft81z38jemisevbl2b776k8dvfszdb6xk10zvl3y0ggjl4q90w0gwy684qq9974kxsztedx8u9uwg8tibtoirpprydr5jbjmih5h78ol3tl9yawphgoztc5dtpw096ttwaxaj0x1vcj3zzlgpkd8s1s4lrq3aq8hez21ge5fgvh7kznpoii4p3mx4bocvsrqfv0u18qs1tpqt84wvvb0t7jn693ur11u8y8884y4jtgwv9soi2p9fqd2fsc3keeqrhgs8128v918c4wvcc45d9shnbyl9e2xm1z334i7smssbks50s874uf8ixgln30jnq6tk6n5h12hqudpcwnnkznbs6uaytrlu8syheocf50ru3q5nj13crp3uealywxx4b0xoxk8v80ufft6gs7zu7wc2wjo7txi9gbl36b == \g\w\w\r\j\r\s\q\n\z\x\4\d\r\3\q\h\n\f\p\y\3\6\e\v\s\h\8\b\j\h\1\l\z\1\k\f\s\n\n\0\p\w\e\3\f\k\f\w\y\e\1\x\7\f\g\g\u\y\h\o\o\5\x\3\f\a\q\a\l\z\3\v\t\e\p\r\9\4\3\7\e\a\0\d\a\u\d\4\f\t\8\1\z\3\8\j\e\m\i\s\e\v\b\l\2\b\7\7\6\k\8\d\v\f\s\z\d\b\6\x\k\1\0\z\v\l\3\y\0\g\g\j\l\4\q\9\0\w\0\g\w\y\6\8\4\q\q\9\9\7\4\k\x\s\z\t\e\d\x\8\u\9\u\w\g\8\t\i\b\t\o\i\r\p\p\r\y\d\r\5\j\b\j\m\i\h\5\h\7\8\o\l\3\t\l\9\y\a\w\p\h\g\o\z\t\c\5\d\t\p\w\0\9\6\t\t\w\a\x\a\j\0\x\1\v\c\j\3\z\z\l\g\p\k\d\8\s\1\s\4\l\r\q\3\a\q\8\h\e\z\2\1\g\e\5\f\g\v\h\7\k\z\n\p\o\i\i\4\p\3\m\x\4\b\o\c\v\s\r\q\f\v\0\u\1\8\q\s\1\t\p\q\t\8\4\w\v\v\b\0\t\7\j\n\6\9\3\u\r\1\1\u\8\y\8\8\8\4\y\4\j\t\g\w\v\9\s\o\i\2\p\9\f\q\d\2\f\s\c\3\k\e\e\q\r\h\g\s\8\1\2\8\v\9\1\8\c\4\w\v\c\c\4\5\d\9\s\h\n\b\y\l\9\e\2\x\m\1\z\3\3\4\i\7\s\m\s\s\b\k\s\5\0\s\8\7\4\u\f\8\i\x\g\l\n\3\0\j\n\q\6\t\k\6\n\5\h\1\2\h\q\u\d\p\c\w\n\n\k\z\n\b\s\6\u\a\y\t\r\l\u\8\s\y\h\e\o\c\f\5\0\r\u\3\q\5\n\j\1\3\c\r\p\3\u\e\a\l\y\w\x\x\4\b\0\x\o\x\k\8\v\8\0\u\f\f\t\6\g\s\7\z\u\7\w\c\2\w\j\o\7\t\x\i\9\g\b\l\3\6\b ]] 00:06:30.295 09:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.295 09:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:30.295 [2024-07-15 09:32:24.678619] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:30.295 [2024-07-15 09:32:24.678737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63374 ] 00:06:30.553 [2024-07-15 09:32:24.814450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.553 [2024-07-15 09:32:24.950227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.553 [2024-07-15 09:32:25.008167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.812  Copying: 512/512 [B] (average 500 kBps) 00:06:30.812 00:06:30.812 09:32:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gwwrjrsqnzx4dr3qhnfpy36evsh8bjh1lz1kfsnn0pwe3fkfwye1x7fgguyhoo5x3faqalz3vtepr9437ea0daud4ft81z38jemisevbl2b776k8dvfszdb6xk10zvl3y0ggjl4q90w0gwy684qq9974kxsztedx8u9uwg8tibtoirpprydr5jbjmih5h78ol3tl9yawphgoztc5dtpw096ttwaxaj0x1vcj3zzlgpkd8s1s4lrq3aq8hez21ge5fgvh7kznpoii4p3mx4bocvsrqfv0u18qs1tpqt84wvvb0t7jn693ur11u8y8884y4jtgwv9soi2p9fqd2fsc3keeqrhgs8128v918c4wvcc45d9shnbyl9e2xm1z334i7smssbks50s874uf8ixgln30jnq6tk6n5h12hqudpcwnnkznbs6uaytrlu8syheocf50ru3q5nj13crp3uealywxx4b0xoxk8v80ufft6gs7zu7wc2wjo7txi9gbl36b == \g\w\w\r\j\r\s\q\n\z\x\4\d\r\3\q\h\n\f\p\y\3\6\e\v\s\h\8\b\j\h\1\l\z\1\k\f\s\n\n\0\p\w\e\3\f\k\f\w\y\e\1\x\7\f\g\g\u\y\h\o\o\5\x\3\f\a\q\a\l\z\3\v\t\e\p\r\9\4\3\7\e\a\0\d\a\u\d\4\f\t\8\1\z\3\8\j\e\m\i\s\e\v\b\l\2\b\7\7\6\k\8\d\v\f\s\z\d\b\6\x\k\1\0\z\v\l\3\y\0\g\g\j\l\4\q\9\0\w\0\g\w\y\6\8\4\q\q\9\9\7\4\k\x\s\z\t\e\d\x\8\u\9\u\w\g\8\t\i\b\t\o\i\r\p\p\r\y\d\r\5\j\b\j\m\i\h\5\h\7\8\o\l\3\t\l\9\y\a\w\p\h\g\o\z\t\c\5\d\t\p\w\0\9\6\t\t\w\a\x\a\j\0\x\1\v\c\j\3\z\z\l\g\p\k\d\8\s\1\s\4\l\r\q\3\a\q\8\h\e\z\2\1\g\e\5\f\g\v\h\7\k\z\n\p\o\i\i\4\p\3\m\x\4\b\o\c\v\s\r\q\f\v\0\u\1\8\q\s\1\t\p\q\t\8\4\w\v\v\b\0\t\7\j\n\6\9\3\u\r\1\1\u\8\y\8\8\8\4\y\4\j\t\g\w\v\9\s\o\i\2\p\9\f\q\d\2\f\s\c\3\k\e\e\q\r\h\g\s\8\1\2\8\v\9\1\8\c\4\w\v\c\c\4\5\d\9\s\h\n\b\y\l\9\e\2\x\m\1\z\3\3\4\i\7\s\m\s\s\b\k\s\5\0\s\8\7\4\u\f\8\i\x\g\l\n\3\0\j\n\q\6\t\k\6\n\5\h\1\2\h\q\u\d\p\c\w\n\n\k\z\n\b\s\6\u\a\y\t\r\l\u\8\s\y\h\e\o\c\f\5\0\r\u\3\q\5\n\j\1\3\c\r\p\3\u\e\a\l\y\w\x\x\4\b\0\x\o\x\k\8\v\8\0\u\f\f\t\6\g\s\7\z\u\7\w\c\2\w\j\o\7\t\x\i\9\g\b\l\3\6\b ]] 00:06:30.812 09:32:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:30.812 09:32:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:31.071 [2024-07-15 09:32:25.298645] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:31.071 [2024-07-15 09:32:25.298741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63384 ] 00:06:31.071 [2024-07-15 09:32:25.434655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.329 [2024-07-15 09:32:25.550228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.329 [2024-07-15 09:32:25.605133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.587  Copying: 512/512 [B] (average 125 kBps) 00:06:31.588 00:06:31.588 09:32:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gwwrjrsqnzx4dr3qhnfpy36evsh8bjh1lz1kfsnn0pwe3fkfwye1x7fgguyhoo5x3faqalz3vtepr9437ea0daud4ft81z38jemisevbl2b776k8dvfszdb6xk10zvl3y0ggjl4q90w0gwy684qq9974kxsztedx8u9uwg8tibtoirpprydr5jbjmih5h78ol3tl9yawphgoztc5dtpw096ttwaxaj0x1vcj3zzlgpkd8s1s4lrq3aq8hez21ge5fgvh7kznpoii4p3mx4bocvsrqfv0u18qs1tpqt84wvvb0t7jn693ur11u8y8884y4jtgwv9soi2p9fqd2fsc3keeqrhgs8128v918c4wvcc45d9shnbyl9e2xm1z334i7smssbks50s874uf8ixgln30jnq6tk6n5h12hqudpcwnnkznbs6uaytrlu8syheocf50ru3q5nj13crp3uealywxx4b0xoxk8v80ufft6gs7zu7wc2wjo7txi9gbl36b == \g\w\w\r\j\r\s\q\n\z\x\4\d\r\3\q\h\n\f\p\y\3\6\e\v\s\h\8\b\j\h\1\l\z\1\k\f\s\n\n\0\p\w\e\3\f\k\f\w\y\e\1\x\7\f\g\g\u\y\h\o\o\5\x\3\f\a\q\a\l\z\3\v\t\e\p\r\9\4\3\7\e\a\0\d\a\u\d\4\f\t\8\1\z\3\8\j\e\m\i\s\e\v\b\l\2\b\7\7\6\k\8\d\v\f\s\z\d\b\6\x\k\1\0\z\v\l\3\y\0\g\g\j\l\4\q\9\0\w\0\g\w\y\6\8\4\q\q\9\9\7\4\k\x\s\z\t\e\d\x\8\u\9\u\w\g\8\t\i\b\t\o\i\r\p\p\r\y\d\r\5\j\b\j\m\i\h\5\h\7\8\o\l\3\t\l\9\y\a\w\p\h\g\o\z\t\c\5\d\t\p\w\0\9\6\t\t\w\a\x\a\j\0\x\1\v\c\j\3\z\z\l\g\p\k\d\8\s\1\s\4\l\r\q\3\a\q\8\h\e\z\2\1\g\e\5\f\g\v\h\7\k\z\n\p\o\i\i\4\p\3\m\x\4\b\o\c\v\s\r\q\f\v\0\u\1\8\q\s\1\t\p\q\t\8\4\w\v\v\b\0\t\7\j\n\6\9\3\u\r\1\1\u\8\y\8\8\8\4\y\4\j\t\g\w\v\9\s\o\i\2\p\9\f\q\d\2\f\s\c\3\k\e\e\q\r\h\g\s\8\1\2\8\v\9\1\8\c\4\w\v\c\c\4\5\d\9\s\h\n\b\y\l\9\e\2\x\m\1\z\3\3\4\i\7\s\m\s\s\b\k\s\5\0\s\8\7\4\u\f\8\i\x\g\l\n\3\0\j\n\q\6\t\k\6\n\5\h\1\2\h\q\u\d\p\c\w\n\n\k\z\n\b\s\6\u\a\y\t\r\l\u\8\s\y\h\e\o\c\f\5\0\r\u\3\q\5\n\j\1\3\c\r\p\3\u\e\a\l\y\w\x\x\4\b\0\x\o\x\k\8\v\8\0\u\f\f\t\6\g\s\7\z\u\7\w\c\2\w\j\o\7\t\x\i\9\g\b\l\3\6\b ]] 00:06:31.588 09:32:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:31.588 09:32:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:31.588 [2024-07-15 09:32:25.934667] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:31.588 [2024-07-15 09:32:25.934757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63399 ] 00:06:31.846 [2024-07-15 09:32:26.066277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.846 [2024-07-15 09:32:26.186139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.846 [2024-07-15 09:32:26.242193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.104  Copying: 512/512 [B] (average 250 kBps) 00:06:32.104 00:06:32.104 ************************************ 00:06:32.104 END TEST dd_flags_misc 00:06:32.104 ************************************ 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gwwrjrsqnzx4dr3qhnfpy36evsh8bjh1lz1kfsnn0pwe3fkfwye1x7fgguyhoo5x3faqalz3vtepr9437ea0daud4ft81z38jemisevbl2b776k8dvfszdb6xk10zvl3y0ggjl4q90w0gwy684qq9974kxsztedx8u9uwg8tibtoirpprydr5jbjmih5h78ol3tl9yawphgoztc5dtpw096ttwaxaj0x1vcj3zzlgpkd8s1s4lrq3aq8hez21ge5fgvh7kznpoii4p3mx4bocvsrqfv0u18qs1tpqt84wvvb0t7jn693ur11u8y8884y4jtgwv9soi2p9fqd2fsc3keeqrhgs8128v918c4wvcc45d9shnbyl9e2xm1z334i7smssbks50s874uf8ixgln30jnq6tk6n5h12hqudpcwnnkznbs6uaytrlu8syheocf50ru3q5nj13crp3uealywxx4b0xoxk8v80ufft6gs7zu7wc2wjo7txi9gbl36b == \g\w\w\r\j\r\s\q\n\z\x\4\d\r\3\q\h\n\f\p\y\3\6\e\v\s\h\8\b\j\h\1\l\z\1\k\f\s\n\n\0\p\w\e\3\f\k\f\w\y\e\1\x\7\f\g\g\u\y\h\o\o\5\x\3\f\a\q\a\l\z\3\v\t\e\p\r\9\4\3\7\e\a\0\d\a\u\d\4\f\t\8\1\z\3\8\j\e\m\i\s\e\v\b\l\2\b\7\7\6\k\8\d\v\f\s\z\d\b\6\x\k\1\0\z\v\l\3\y\0\g\g\j\l\4\q\9\0\w\0\g\w\y\6\8\4\q\q\9\9\7\4\k\x\s\z\t\e\d\x\8\u\9\u\w\g\8\t\i\b\t\o\i\r\p\p\r\y\d\r\5\j\b\j\m\i\h\5\h\7\8\o\l\3\t\l\9\y\a\w\p\h\g\o\z\t\c\5\d\t\p\w\0\9\6\t\t\w\a\x\a\j\0\x\1\v\c\j\3\z\z\l\g\p\k\d\8\s\1\s\4\l\r\q\3\a\q\8\h\e\z\2\1\g\e\5\f\g\v\h\7\k\z\n\p\o\i\i\4\p\3\m\x\4\b\o\c\v\s\r\q\f\v\0\u\1\8\q\s\1\t\p\q\t\8\4\w\v\v\b\0\t\7\j\n\6\9\3\u\r\1\1\u\8\y\8\8\8\4\y\4\j\t\g\w\v\9\s\o\i\2\p\9\f\q\d\2\f\s\c\3\k\e\e\q\r\h\g\s\8\1\2\8\v\9\1\8\c\4\w\v\c\c\4\5\d\9\s\h\n\b\y\l\9\e\2\x\m\1\z\3\3\4\i\7\s\m\s\s\b\k\s\5\0\s\8\7\4\u\f\8\i\x\g\l\n\3\0\j\n\q\6\t\k\6\n\5\h\1\2\h\q\u\d\p\c\w\n\n\k\z\n\b\s\6\u\a\y\t\r\l\u\8\s\y\h\e\o\c\f\5\0\r\u\3\q\5\n\j\1\3\c\r\p\3\u\e\a\l\y\w\x\x\4\b\0\x\o\x\k\8\v\8\0\u\f\f\t\6\g\s\7\z\u\7\w\c\2\w\j\o\7\t\x\i\9\g\b\l\3\6\b ]] 00:06:32.104 00:06:32.104 real 0m5.051s 00:06:32.104 user 0m2.985s 00:06:32.104 sys 0m2.277s 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:32.104 * Second test run, disabling liburing, forcing AIO 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:32.104 ************************************ 00:06:32.104 START TEST dd_flag_append_forced_aio 00:06:32.104 ************************************ 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=c33l8n1xtjee0eerkr1zfdqd2jzjvyfa 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=zzhmk8hbmusm0urlpszfg4l928g6n676 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s c33l8n1xtjee0eerkr1zfdqd2jzjvyfa 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s zzhmk8hbmusm0urlpszfg4l928g6n676 00:06:32.104 09:32:26 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:32.363 [2024-07-15 09:32:26.614790] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
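Editor's sketch: the second pass announced above ("disabling liburing, forcing AIO") reruns the same cases with --aio prepended through DD_APP+=("--aio"), so spdk_dd takes its POSIX AIO code path instead of the io_uring path used in the first pass. How that array is applied, following the harness fragments visible in the log (dump file names illustrative):

  DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
  DD_APP+=("--aio")                     # what posix.sh does before the forced-AIO run
  "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append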
00:06:32.363 [2024-07-15 09:32:26.614875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63426 ] 00:06:32.363 [2024-07-15 09:32:26.750589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.622 [2024-07-15 09:32:26.869207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.622 [2024-07-15 09:32:26.922467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.885  Copying: 32/32 [B] (average 31 kBps) 00:06:32.885 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ zzhmk8hbmusm0urlpszfg4l928g6n676c33l8n1xtjee0eerkr1zfdqd2jzjvyfa == \z\z\h\m\k\8\h\b\m\u\s\m\0\u\r\l\p\s\z\f\g\4\l\9\2\8\g\6\n\6\7\6\c\3\3\l\8\n\1\x\t\j\e\e\0\e\e\r\k\r\1\z\f\d\q\d\2\j\z\j\v\y\f\a ]] 00:06:32.885 00:06:32.885 real 0m0.666s 00:06:32.885 user 0m0.399s 00:06:32.885 sys 0m0.140s 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:32.885 ************************************ 00:06:32.885 END TEST dd_flag_append_forced_aio 00:06:32.885 ************************************ 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:32.885 ************************************ 00:06:32.885 START TEST dd_flag_directory_forced_aio 00:06:32.885 ************************************ 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.885 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.885 [2024-07-15 09:32:27.319383] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:32.885 [2024-07-15 09:32:27.319481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63454 ] 00:06:33.152 [2024-07-15 09:32:27.459159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.152 [2024-07-15 09:32:27.605159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.409 [2024-07-15 09:32:27.663326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.409 [2024-07-15 09:32:27.701047] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.409 [2024-07-15 09:32:27.701114] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.409 [2024-07-15 09:32:27.701131] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.409 [2024-07-15 09:32:27.817437] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:33.667 09:32:27 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:33.667 [2024-07-15 09:32:27.977916] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:33.667 [2024-07-15 09:32:27.978021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63469 ] 00:06:33.667 [2024-07-15 09:32:28.117922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.925 [2024-07-15 09:32:28.240770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.925 [2024-07-15 09:32:28.299480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.925 [2024-07-15 09:32:28.336112] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.925 [2024-07-15 09:32:28.336173] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.925 [2024-07-15 09:32:28.336205] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.183 [2024-07-15 09:32:28.454596] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:34.183 
09:32:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.183 00:06:34.183 real 0m1.291s 00:06:34.183 user 0m0.761s 00:06:34.183 sys 0m0.316s 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.183 ************************************ 00:06:34.183 END TEST dd_flag_directory_forced_aio 00:06:34.183 ************************************ 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:34.183 ************************************ 00:06:34.183 START TEST dd_flag_nofollow_forced_aio 00:06:34.183 ************************************ 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.183 09:32:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.441 [2024-07-15 09:32:28.679123] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:34.441 [2024-07-15 09:32:28.679233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63492 ] 00:06:34.441 [2024-07-15 09:32:28.823503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.699 [2024-07-15 09:32:28.957028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.699 [2024-07-15 09:32:29.014566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.699 [2024-07-15 09:32:29.049317] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:34.699 [2024-07-15 09:32:29.049384] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:34.699 [2024-07-15 09:32:29.049400] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.956 [2024-07-15 09:32:29.165298] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
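For orientation while reading the nofollow run above: it reduces to roughly the standalone sequence sketched below. This is a sketch, not harness output; it assumes the same spdk_dd build path and dump files this log uses, on a host already prepared for SPDK (hugepages, device permissions), and that dd.dump0/dd.dump1 were created earlier in the run as they are here.

# Sketch of the nofollow check, under the assumptions stated above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

ln -fs "$DUMP0" "$DUMP0.link"   # dd.dump0.link -> dd.dump0
ln -fs "$DUMP1" "$DUMP1.link"   # dd.dump1.link -> dd.dump1

# Reading through the link with --iflag=nofollow must fail with
# "Too many levels of symbolic links"; a successful copy here is a test failure.
if "$SPDK_DD" --aio --if="$DUMP0.link" --iflag=nofollow --of="$DUMP1"; then
  echo "ERROR: nofollow did not reject the symlink" >&2
fi

# The same rejection is expected on the write side via --oflag=nofollow,
# while a copy through the link without the flag should succeed:
"$SPDK_DD" --aio --if="$DUMP0.link" --of="$DUMP1"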
00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.956 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:34.956 [2024-07-15 09:32:29.329864] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:34.956 [2024-07-15 09:32:29.329980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63507 ] 00:06:35.213 [2024-07-15 09:32:29.467753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.213 [2024-07-15 09:32:29.586259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.213 [2024-07-15 09:32:29.640658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.213 [2024-07-15 09:32:29.677005] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:35.213 [2024-07-15 09:32:29.677296] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:35.213 [2024-07-15 09:32:29.677319] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.470 [2024-07-15 09:32:29.792027] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.470 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:35.470 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.470 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:35.470 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:35.470 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:35.470 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.470 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:35.470 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:35.470 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:35.470 09:32:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.728 [2024-07-15 09:32:29.948558] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:35.728 [2024-07-15 09:32:29.948638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63520 ] 00:06:35.728 [2024-07-15 09:32:30.084500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.986 [2024-07-15 09:32:30.195650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.986 [2024-07-15 09:32:30.251838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.245  Copying: 512/512 [B] (average 500 kBps) 00:06:36.245 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ v6xpc77xd7usskr5zwo8u0n6dmqysfutr4pkrv8e3uyq8fa5i04nca14mpypv52hy49kk2yd4ac25fgf4qg49td7ukmrd278sl55xep9ghhxm9af9voswed4ziseg8dab9938003lg97eqi9qomsv7dcak0e11valb723d15r62jjt7rtn9oaqvl5sdnoz26o71r4fnobv9g854cng8tmni4k9120hd6cd9ioxa9qyx83wo42k0ktdycgzys1k9c8o9ekriscmvtlxlgnqq3xpinaxgi5di7dl2ql0xe1xaj71sk42dvt7anj01uwugmgkr77rfhkjgjjf2j1d937adcgef3qshb4eij455r6z7genptsc689wr5jkcn3d612wqx9usn2mz3x9rkwlinyamjr47f2g5dl7enn2n2zn6kyk0nkpw8mpsopp5p3rxqok1jy91ih8rjv2m21dyd0c4qjf6yuw0uz4b0dkvisxanxqj6a98y41vikeidpoff == \v\6\x\p\c\7\7\x\d\7\u\s\s\k\r\5\z\w\o\8\u\0\n\6\d\m\q\y\s\f\u\t\r\4\p\k\r\v\8\e\3\u\y\q\8\f\a\5\i\0\4\n\c\a\1\4\m\p\y\p\v\5\2\h\y\4\9\k\k\2\y\d\4\a\c\2\5\f\g\f\4\q\g\4\9\t\d\7\u\k\m\r\d\2\7\8\s\l\5\5\x\e\p\9\g\h\h\x\m\9\a\f\9\v\o\s\w\e\d\4\z\i\s\e\g\8\d\a\b\9\9\3\8\0\0\3\l\g\9\7\e\q\i\9\q\o\m\s\v\7\d\c\a\k\0\e\1\1\v\a\l\b\7\2\3\d\1\5\r\6\2\j\j\t\7\r\t\n\9\o\a\q\v\l\5\s\d\n\o\z\2\6\o\7\1\r\4\f\n\o\b\v\9\g\8\5\4\c\n\g\8\t\m\n\i\4\k\9\1\2\0\h\d\6\c\d\9\i\o\x\a\9\q\y\x\8\3\w\o\4\2\k\0\k\t\d\y\c\g\z\y\s\1\k\9\c\8\o\9\e\k\r\i\s\c\m\v\t\l\x\l\g\n\q\q\3\x\p\i\n\a\x\g\i\5\d\i\7\d\l\2\q\l\0\x\e\1\x\a\j\7\1\s\k\4\2\d\v\t\7\a\n\j\0\1\u\w\u\g\m\g\k\r\7\7\r\f\h\k\j\g\j\j\f\2\j\1\d\9\3\7\a\d\c\g\e\f\3\q\s\h\b\4\e\i\j\4\5\5\r\6\z\7\g\e\n\p\t\s\c\6\8\9\w\r\5\j\k\c\n\3\d\6\1\2\w\q\x\9\u\s\n\2\m\z\3\x\9\r\k\w\l\i\n\y\a\m\j\r\4\7\f\2\g\5\d\l\7\e\n\n\2\n\2\z\n\6\k\y\k\0\n\k\p\w\8\m\p\s\o\p\p\5\p\3\r\x\q\o\k\1\j\y\9\1\i\h\8\r\j\v\2\m\2\1\d\y\d\0\c\4\q\j\f\6\y\u\w\0\u\z\4\b\0\d\k\v\i\s\x\a\n\x\q\j\6\a\9\8\y\4\1\v\i\k\e\i\d\p\o\f\f ]] 00:06:36.245 00:06:36.245 real 0m1.941s 00:06:36.245 user 0m1.142s 00:06:36.245 sys 0m0.456s 00:06:36.245 ************************************ 00:06:36.245 END TEST dd_flag_nofollow_forced_aio 00:06:36.245 ************************************ 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.245 09:32:30 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:36.245 ************************************ 00:06:36.245 START TEST dd_flag_noatime_forced_aio 00:06:36.245 ************************************ 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721035950 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721035950 00:06:36.245 09:32:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:37.192 09:32:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.479 [2024-07-15 09:32:31.682547] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
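The noatime setup just above (stat --printf=%X on both dump files, sleep 1, then a copy with --iflag=noatime) follows the pattern sketched here. The sketch assumes the log's paths and a filesystem that still updates atime on ordinary reads, as this run evidently does.

# Sketch of the noatime assertion, under the assumptions stated above.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

atime_before=$(stat --printf=%X "$SRC")
sleep 1                                    # so a later access is distinguishable

"$SPDK_DD" --aio --if="$SRC" --iflag=noatime --of="$DST"
(( $(stat --printf=%X "$SRC") == atime_before )) \
  || echo "ERROR: --iflag=noatime still updated atime" >&2

"$SPDK_DD" --aio --if="$SRC" --of="$DST"   # plain read; on atime-updating mounts the timestamp advances
(( $(stat --printf=%X "$SRC") > atime_before )) \
  || echo "ERROR: atime did not advance after an unflagged read" >&2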
00:06:37.479 [2024-07-15 09:32:31.682652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63555 ] 00:06:37.479 [2024-07-15 09:32:31.822487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.737 [2024-07-15 09:32:31.953817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.737 [2024-07-15 09:32:32.013736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.995  Copying: 512/512 [B] (average 500 kBps) 00:06:37.995 00:06:37.995 09:32:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:37.995 09:32:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721035950 )) 00:06:37.995 09:32:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.995 09:32:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721035950 )) 00:06:37.995 09:32:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.996 [2024-07-15 09:32:32.365658] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:37.996 [2024-07-15 09:32:32.365775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63572 ] 00:06:38.252 [2024-07-15 09:32:32.505610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.252 [2024-07-15 09:32:32.641750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.252 [2024-07-15 09:32:32.698836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.510  Copying: 512/512 [B] (average 500 kBps) 00:06:38.510 00:06:38.510 09:32:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:38.510 ************************************ 00:06:38.510 END TEST dd_flag_noatime_forced_aio 00:06:38.510 ************************************ 00:06:38.510 09:32:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721035952 )) 00:06:38.510 00:06:38.510 real 0m2.369s 00:06:38.510 user 0m0.790s 00:06:38.510 sys 0m0.339s 00:06:38.510 09:32:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.510 09:32:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.768 09:32:33 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:38.768 ************************************ 00:06:38.768 START TEST dd_flags_misc_forced_aio 00:06:38.768 ************************************ 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.768 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:38.768 [2024-07-15 09:32:33.082930] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:38.768 [2024-07-15 09:32:33.083038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63604 ] 00:06:38.768 [2024-07-15 09:32:33.222012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.026 [2024-07-15 09:32:33.344189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.026 [2024-07-15 09:32:33.401486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.284  Copying: 512/512 [B] (average 500 kBps) 00:06:39.284 00:06:39.284 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zrr0xfurd0kfkhkgisa5vwh9o49bs1gx046jx84e70uwd78ut6j406zslwlg6avwhxjef8um0lu32d4d3h20c1kw1184wvbmrodyrasx84ktsro52x9zfss2rqtfux1t3ytnyy76wi2j95rxfmw7rib1w3vensywc3kgga91jjjffc61vi67vi7lt889wiagie8tl63h8p5xseuttj8d47grj5ykk9drxn5z25uyozdmyro42ygljkoosf6sz5hattqfenirr1uyt0629lmsqy7oc88itp39oru9ae3vgpdzn1vex3xlc20ckm1x72mzlddnmpkodd2lfjurn2l894ndaci1p8fj2yaclz1685gz5m8brd5c34szy1h3vjb3g0l4jefgfqgc1m3biu8sy02ieyhx4g2jmpsawi6n6xqxelgxpj37agx1gk1aodq1cbbqy28hajhuz0hzyv41jsdo7c5mf57lzx0xzfswxto1a3hwiof4a87uld76i707 == 
\z\r\r\0\x\f\u\r\d\0\k\f\k\h\k\g\i\s\a\5\v\w\h\9\o\4\9\b\s\1\g\x\0\4\6\j\x\8\4\e\7\0\u\w\d\7\8\u\t\6\j\4\0\6\z\s\l\w\l\g\6\a\v\w\h\x\j\e\f\8\u\m\0\l\u\3\2\d\4\d\3\h\2\0\c\1\k\w\1\1\8\4\w\v\b\m\r\o\d\y\r\a\s\x\8\4\k\t\s\r\o\5\2\x\9\z\f\s\s\2\r\q\t\f\u\x\1\t\3\y\t\n\y\y\7\6\w\i\2\j\9\5\r\x\f\m\w\7\r\i\b\1\w\3\v\e\n\s\y\w\c\3\k\g\g\a\9\1\j\j\j\f\f\c\6\1\v\i\6\7\v\i\7\l\t\8\8\9\w\i\a\g\i\e\8\t\l\6\3\h\8\p\5\x\s\e\u\t\t\j\8\d\4\7\g\r\j\5\y\k\k\9\d\r\x\n\5\z\2\5\u\y\o\z\d\m\y\r\o\4\2\y\g\l\j\k\o\o\s\f\6\s\z\5\h\a\t\t\q\f\e\n\i\r\r\1\u\y\t\0\6\2\9\l\m\s\q\y\7\o\c\8\8\i\t\p\3\9\o\r\u\9\a\e\3\v\g\p\d\z\n\1\v\e\x\3\x\l\c\2\0\c\k\m\1\x\7\2\m\z\l\d\d\n\m\p\k\o\d\d\2\l\f\j\u\r\n\2\l\8\9\4\n\d\a\c\i\1\p\8\f\j\2\y\a\c\l\z\1\6\8\5\g\z\5\m\8\b\r\d\5\c\3\4\s\z\y\1\h\3\v\j\b\3\g\0\l\4\j\e\f\g\f\q\g\c\1\m\3\b\i\u\8\s\y\0\2\i\e\y\h\x\4\g\2\j\m\p\s\a\w\i\6\n\6\x\q\x\e\l\g\x\p\j\3\7\a\g\x\1\g\k\1\a\o\d\q\1\c\b\b\q\y\2\8\h\a\j\h\u\z\0\h\z\y\v\4\1\j\s\d\o\7\c\5\m\f\5\7\l\z\x\0\x\z\f\s\w\x\t\o\1\a\3\h\w\i\o\f\4\a\8\7\u\l\d\7\6\i\7\0\7 ]] 00:06:39.284 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.284 09:32:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:39.542 [2024-07-15 09:32:33.762277] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:39.542 [2024-07-15 09:32:33.762409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63606 ] 00:06:39.542 [2024-07-15 09:32:33.904307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.800 [2024-07-15 09:32:34.039350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.800 [2024-07-15 09:32:34.100273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.059  Copying: 512/512 [B] (average 500 kBps) 00:06:40.059 00:06:40.059 09:32:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zrr0xfurd0kfkhkgisa5vwh9o49bs1gx046jx84e70uwd78ut6j406zslwlg6avwhxjef8um0lu32d4d3h20c1kw1184wvbmrodyrasx84ktsro52x9zfss2rqtfux1t3ytnyy76wi2j95rxfmw7rib1w3vensywc3kgga91jjjffc61vi67vi7lt889wiagie8tl63h8p5xseuttj8d47grj5ykk9drxn5z25uyozdmyro42ygljkoosf6sz5hattqfenirr1uyt0629lmsqy7oc88itp39oru9ae3vgpdzn1vex3xlc20ckm1x72mzlddnmpkodd2lfjurn2l894ndaci1p8fj2yaclz1685gz5m8brd5c34szy1h3vjb3g0l4jefgfqgc1m3biu8sy02ieyhx4g2jmpsawi6n6xqxelgxpj37agx1gk1aodq1cbbqy28hajhuz0hzyv41jsdo7c5mf57lzx0xzfswxto1a3hwiof4a87uld76i707 == 
\z\r\r\0\x\f\u\r\d\0\k\f\k\h\k\g\i\s\a\5\v\w\h\9\o\4\9\b\s\1\g\x\0\4\6\j\x\8\4\e\7\0\u\w\d\7\8\u\t\6\j\4\0\6\z\s\l\w\l\g\6\a\v\w\h\x\j\e\f\8\u\m\0\l\u\3\2\d\4\d\3\h\2\0\c\1\k\w\1\1\8\4\w\v\b\m\r\o\d\y\r\a\s\x\8\4\k\t\s\r\o\5\2\x\9\z\f\s\s\2\r\q\t\f\u\x\1\t\3\y\t\n\y\y\7\6\w\i\2\j\9\5\r\x\f\m\w\7\r\i\b\1\w\3\v\e\n\s\y\w\c\3\k\g\g\a\9\1\j\j\j\f\f\c\6\1\v\i\6\7\v\i\7\l\t\8\8\9\w\i\a\g\i\e\8\t\l\6\3\h\8\p\5\x\s\e\u\t\t\j\8\d\4\7\g\r\j\5\y\k\k\9\d\r\x\n\5\z\2\5\u\y\o\z\d\m\y\r\o\4\2\y\g\l\j\k\o\o\s\f\6\s\z\5\h\a\t\t\q\f\e\n\i\r\r\1\u\y\t\0\6\2\9\l\m\s\q\y\7\o\c\8\8\i\t\p\3\9\o\r\u\9\a\e\3\v\g\p\d\z\n\1\v\e\x\3\x\l\c\2\0\c\k\m\1\x\7\2\m\z\l\d\d\n\m\p\k\o\d\d\2\l\f\j\u\r\n\2\l\8\9\4\n\d\a\c\i\1\p\8\f\j\2\y\a\c\l\z\1\6\8\5\g\z\5\m\8\b\r\d\5\c\3\4\s\z\y\1\h\3\v\j\b\3\g\0\l\4\j\e\f\g\f\q\g\c\1\m\3\b\i\u\8\s\y\0\2\i\e\y\h\x\4\g\2\j\m\p\s\a\w\i\6\n\6\x\q\x\e\l\g\x\p\j\3\7\a\g\x\1\g\k\1\a\o\d\q\1\c\b\b\q\y\2\8\h\a\j\h\u\z\0\h\z\y\v\4\1\j\s\d\o\7\c\5\m\f\5\7\l\z\x\0\x\z\f\s\w\x\t\o\1\a\3\h\w\i\o\f\4\a\8\7\u\l\d\7\6\i\7\0\7 ]] 00:06:40.059 09:32:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.059 09:32:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:40.059 [2024-07-15 09:32:34.460504] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:40.059 [2024-07-15 09:32:34.460606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63619 ] 00:06:40.318 [2024-07-15 09:32:34.599540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.318 [2024-07-15 09:32:34.719511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.318 [2024-07-15 09:32:34.774610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.865  Copying: 512/512 [B] (average 166 kBps) 00:06:40.865 00:06:40.865 09:32:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zrr0xfurd0kfkhkgisa5vwh9o49bs1gx046jx84e70uwd78ut6j406zslwlg6avwhxjef8um0lu32d4d3h20c1kw1184wvbmrodyrasx84ktsro52x9zfss2rqtfux1t3ytnyy76wi2j95rxfmw7rib1w3vensywc3kgga91jjjffc61vi67vi7lt889wiagie8tl63h8p5xseuttj8d47grj5ykk9drxn5z25uyozdmyro42ygljkoosf6sz5hattqfenirr1uyt0629lmsqy7oc88itp39oru9ae3vgpdzn1vex3xlc20ckm1x72mzlddnmpkodd2lfjurn2l894ndaci1p8fj2yaclz1685gz5m8brd5c34szy1h3vjb3g0l4jefgfqgc1m3biu8sy02ieyhx4g2jmpsawi6n6xqxelgxpj37agx1gk1aodq1cbbqy28hajhuz0hzyv41jsdo7c5mf57lzx0xzfswxto1a3hwiof4a87uld76i707 == 
\z\r\r\0\x\f\u\r\d\0\k\f\k\h\k\g\i\s\a\5\v\w\h\9\o\4\9\b\s\1\g\x\0\4\6\j\x\8\4\e\7\0\u\w\d\7\8\u\t\6\j\4\0\6\z\s\l\w\l\g\6\a\v\w\h\x\j\e\f\8\u\m\0\l\u\3\2\d\4\d\3\h\2\0\c\1\k\w\1\1\8\4\w\v\b\m\r\o\d\y\r\a\s\x\8\4\k\t\s\r\o\5\2\x\9\z\f\s\s\2\r\q\t\f\u\x\1\t\3\y\t\n\y\y\7\6\w\i\2\j\9\5\r\x\f\m\w\7\r\i\b\1\w\3\v\e\n\s\y\w\c\3\k\g\g\a\9\1\j\j\j\f\f\c\6\1\v\i\6\7\v\i\7\l\t\8\8\9\w\i\a\g\i\e\8\t\l\6\3\h\8\p\5\x\s\e\u\t\t\j\8\d\4\7\g\r\j\5\y\k\k\9\d\r\x\n\5\z\2\5\u\y\o\z\d\m\y\r\o\4\2\y\g\l\j\k\o\o\s\f\6\s\z\5\h\a\t\t\q\f\e\n\i\r\r\1\u\y\t\0\6\2\9\l\m\s\q\y\7\o\c\8\8\i\t\p\3\9\o\r\u\9\a\e\3\v\g\p\d\z\n\1\v\e\x\3\x\l\c\2\0\c\k\m\1\x\7\2\m\z\l\d\d\n\m\p\k\o\d\d\2\l\f\j\u\r\n\2\l\8\9\4\n\d\a\c\i\1\p\8\f\j\2\y\a\c\l\z\1\6\8\5\g\z\5\m\8\b\r\d\5\c\3\4\s\z\y\1\h\3\v\j\b\3\g\0\l\4\j\e\f\g\f\q\g\c\1\m\3\b\i\u\8\s\y\0\2\i\e\y\h\x\4\g\2\j\m\p\s\a\w\i\6\n\6\x\q\x\e\l\g\x\p\j\3\7\a\g\x\1\g\k\1\a\o\d\q\1\c\b\b\q\y\2\8\h\a\j\h\u\z\0\h\z\y\v\4\1\j\s\d\o\7\c\5\m\f\5\7\l\z\x\0\x\z\f\s\w\x\t\o\1\a\3\h\w\i\o\f\4\a\8\7\u\l\d\7\6\i\7\0\7 ]] 00:06:40.865 09:32:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.865 09:32:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:40.865 [2024-07-15 09:32:35.140402] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:40.865 [2024-07-15 09:32:35.140519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63632 ] 00:06:40.865 [2024-07-15 09:32:35.274550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.148 [2024-07-15 09:32:35.397714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.148 [2024-07-15 09:32:35.455141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.422  Copying: 512/512 [B] (average 500 kBps) 00:06:41.422 00:06:41.422 09:32:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zrr0xfurd0kfkhkgisa5vwh9o49bs1gx046jx84e70uwd78ut6j406zslwlg6avwhxjef8um0lu32d4d3h20c1kw1184wvbmrodyrasx84ktsro52x9zfss2rqtfux1t3ytnyy76wi2j95rxfmw7rib1w3vensywc3kgga91jjjffc61vi67vi7lt889wiagie8tl63h8p5xseuttj8d47grj5ykk9drxn5z25uyozdmyro42ygljkoosf6sz5hattqfenirr1uyt0629lmsqy7oc88itp39oru9ae3vgpdzn1vex3xlc20ckm1x72mzlddnmpkodd2lfjurn2l894ndaci1p8fj2yaclz1685gz5m8brd5c34szy1h3vjb3g0l4jefgfqgc1m3biu8sy02ieyhx4g2jmpsawi6n6xqxelgxpj37agx1gk1aodq1cbbqy28hajhuz0hzyv41jsdo7c5mf57lzx0xzfswxto1a3hwiof4a87uld76i707 == 
\z\r\r\0\x\f\u\r\d\0\k\f\k\h\k\g\i\s\a\5\v\w\h\9\o\4\9\b\s\1\g\x\0\4\6\j\x\8\4\e\7\0\u\w\d\7\8\u\t\6\j\4\0\6\z\s\l\w\l\g\6\a\v\w\h\x\j\e\f\8\u\m\0\l\u\3\2\d\4\d\3\h\2\0\c\1\k\w\1\1\8\4\w\v\b\m\r\o\d\y\r\a\s\x\8\4\k\t\s\r\o\5\2\x\9\z\f\s\s\2\r\q\t\f\u\x\1\t\3\y\t\n\y\y\7\6\w\i\2\j\9\5\r\x\f\m\w\7\r\i\b\1\w\3\v\e\n\s\y\w\c\3\k\g\g\a\9\1\j\j\j\f\f\c\6\1\v\i\6\7\v\i\7\l\t\8\8\9\w\i\a\g\i\e\8\t\l\6\3\h\8\p\5\x\s\e\u\t\t\j\8\d\4\7\g\r\j\5\y\k\k\9\d\r\x\n\5\z\2\5\u\y\o\z\d\m\y\r\o\4\2\y\g\l\j\k\o\o\s\f\6\s\z\5\h\a\t\t\q\f\e\n\i\r\r\1\u\y\t\0\6\2\9\l\m\s\q\y\7\o\c\8\8\i\t\p\3\9\o\r\u\9\a\e\3\v\g\p\d\z\n\1\v\e\x\3\x\l\c\2\0\c\k\m\1\x\7\2\m\z\l\d\d\n\m\p\k\o\d\d\2\l\f\j\u\r\n\2\l\8\9\4\n\d\a\c\i\1\p\8\f\j\2\y\a\c\l\z\1\6\8\5\g\z\5\m\8\b\r\d\5\c\3\4\s\z\y\1\h\3\v\j\b\3\g\0\l\4\j\e\f\g\f\q\g\c\1\m\3\b\i\u\8\s\y\0\2\i\e\y\h\x\4\g\2\j\m\p\s\a\w\i\6\n\6\x\q\x\e\l\g\x\p\j\3\7\a\g\x\1\g\k\1\a\o\d\q\1\c\b\b\q\y\2\8\h\a\j\h\u\z\0\h\z\y\v\4\1\j\s\d\o\7\c\5\m\f\5\7\l\z\x\0\x\z\f\s\w\x\t\o\1\a\3\h\w\i\o\f\4\a\8\7\u\l\d\7\6\i\7\0\7 ]] 00:06:41.422 09:32:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:41.422 09:32:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:41.422 09:32:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:41.422 09:32:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:41.422 09:32:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.422 09:32:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:41.422 [2024-07-15 09:32:35.828759] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
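The runs above and below walk a small read/write flag matrix. A condensed sketch of that loop follows; cmp stands in for the harness's string comparison and /dev/urandom for its gen_bytes payload, with paths and the spdk_dd location carried over from this log.

# Sketch of the flag matrix exercised by dd_flags_misc_forced_aio.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)     # direct nonblock sync dsync

for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    head -c 512 /dev/urandom > "$SRC"      # fresh 512-byte payload per combination
    "$SPDK_DD" --aio --if="$SRC" --iflag="$flag_ro" \
               --of="$DST" --oflag="$flag_rw"
    cmp -n 512 "$SRC" "$DST" || echo "ERROR: payload mismatch with $flag_ro/$flag_rw" >&2
  done
done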
00:06:41.422 [2024-07-15 09:32:35.828858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63635 ] 00:06:41.680 [2024-07-15 09:32:35.966283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.680 [2024-07-15 09:32:36.077443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.680 [2024-07-15 09:32:36.132408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.939  Copying: 512/512 [B] (average 500 kBps) 00:06:41.939 00:06:41.939 09:32:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ckvz0rubk7m6mbmayukqemmyemjhie8ghi3phyi4mtcz3mlcddccxqayylwaffb2coafu5gq6ebhcrywk00g0oib338452wmkslowwl3ydgdrgow00p7qqasljrfwdriuor4gof36hnizj7mjsxnskeo2j9zxozmgzejhfvxssub6ixqkbmorthn3qhq7ie2h8vn0pprol7ckttzdnf7kwc1yl2g43uypdaasc8ioow87fvyfea2kdsoa55v2v0guam5qhtxd3rh6aa1owhz8kakrtg5kla2t2n57o6jp4nc5aezjuca5ii2osyl1otqay3zr19l64yo5otzb1jl085wfhxqs7urvp64itusq01itocqdu47xj5l7px30z0qvv32eh7l3hq99zd4492tfvml3clea776nt0tyyj8zb2z2k080x7x5qwzl0aapxtnifvve2vd0ztkcilnqufeao0kj04od2ebb4qu2sjbzsza5acpmi1hgi843vn6k97x == \c\k\v\z\0\r\u\b\k\7\m\6\m\b\m\a\y\u\k\q\e\m\m\y\e\m\j\h\i\e\8\g\h\i\3\p\h\y\i\4\m\t\c\z\3\m\l\c\d\d\c\c\x\q\a\y\y\l\w\a\f\f\b\2\c\o\a\f\u\5\g\q\6\e\b\h\c\r\y\w\k\0\0\g\0\o\i\b\3\3\8\4\5\2\w\m\k\s\l\o\w\w\l\3\y\d\g\d\r\g\o\w\0\0\p\7\q\q\a\s\l\j\r\f\w\d\r\i\u\o\r\4\g\o\f\3\6\h\n\i\z\j\7\m\j\s\x\n\s\k\e\o\2\j\9\z\x\o\z\m\g\z\e\j\h\f\v\x\s\s\u\b\6\i\x\q\k\b\m\o\r\t\h\n\3\q\h\q\7\i\e\2\h\8\v\n\0\p\p\r\o\l\7\c\k\t\t\z\d\n\f\7\k\w\c\1\y\l\2\g\4\3\u\y\p\d\a\a\s\c\8\i\o\o\w\8\7\f\v\y\f\e\a\2\k\d\s\o\a\5\5\v\2\v\0\g\u\a\m\5\q\h\t\x\d\3\r\h\6\a\a\1\o\w\h\z\8\k\a\k\r\t\g\5\k\l\a\2\t\2\n\5\7\o\6\j\p\4\n\c\5\a\e\z\j\u\c\a\5\i\i\2\o\s\y\l\1\o\t\q\a\y\3\z\r\1\9\l\6\4\y\o\5\o\t\z\b\1\j\l\0\8\5\w\f\h\x\q\s\7\u\r\v\p\6\4\i\t\u\s\q\0\1\i\t\o\c\q\d\u\4\7\x\j\5\l\7\p\x\3\0\z\0\q\v\v\3\2\e\h\7\l\3\h\q\9\9\z\d\4\4\9\2\t\f\v\m\l\3\c\l\e\a\7\7\6\n\t\0\t\y\y\j\8\z\b\2\z\2\k\0\8\0\x\7\x\5\q\w\z\l\0\a\a\p\x\t\n\i\f\v\v\e\2\v\d\0\z\t\k\c\i\l\n\q\u\f\e\a\o\0\k\j\0\4\o\d\2\e\b\b\4\q\u\2\s\j\b\z\s\z\a\5\a\c\p\m\i\1\h\g\i\8\4\3\v\n\6\k\9\7\x ]] 00:06:41.939 09:32:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.939 09:32:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:42.197 [2024-07-15 09:32:36.443196] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:42.197 [2024-07-15 09:32:36.443292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63648 ] 00:06:42.197 [2024-07-15 09:32:36.577472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.456 [2024-07-15 09:32:36.679255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.456 [2024-07-15 09:32:36.734924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.714  Copying: 512/512 [B] (average 500 kBps) 00:06:42.714 00:06:42.714 09:32:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ckvz0rubk7m6mbmayukqemmyemjhie8ghi3phyi4mtcz3mlcddccxqayylwaffb2coafu5gq6ebhcrywk00g0oib338452wmkslowwl3ydgdrgow00p7qqasljrfwdriuor4gof36hnizj7mjsxnskeo2j9zxozmgzejhfvxssub6ixqkbmorthn3qhq7ie2h8vn0pprol7ckttzdnf7kwc1yl2g43uypdaasc8ioow87fvyfea2kdsoa55v2v0guam5qhtxd3rh6aa1owhz8kakrtg5kla2t2n57o6jp4nc5aezjuca5ii2osyl1otqay3zr19l64yo5otzb1jl085wfhxqs7urvp64itusq01itocqdu47xj5l7px30z0qvv32eh7l3hq99zd4492tfvml3clea776nt0tyyj8zb2z2k080x7x5qwzl0aapxtnifvve2vd0ztkcilnqufeao0kj04od2ebb4qu2sjbzsza5acpmi1hgi843vn6k97x == \c\k\v\z\0\r\u\b\k\7\m\6\m\b\m\a\y\u\k\q\e\m\m\y\e\m\j\h\i\e\8\g\h\i\3\p\h\y\i\4\m\t\c\z\3\m\l\c\d\d\c\c\x\q\a\y\y\l\w\a\f\f\b\2\c\o\a\f\u\5\g\q\6\e\b\h\c\r\y\w\k\0\0\g\0\o\i\b\3\3\8\4\5\2\w\m\k\s\l\o\w\w\l\3\y\d\g\d\r\g\o\w\0\0\p\7\q\q\a\s\l\j\r\f\w\d\r\i\u\o\r\4\g\o\f\3\6\h\n\i\z\j\7\m\j\s\x\n\s\k\e\o\2\j\9\z\x\o\z\m\g\z\e\j\h\f\v\x\s\s\u\b\6\i\x\q\k\b\m\o\r\t\h\n\3\q\h\q\7\i\e\2\h\8\v\n\0\p\p\r\o\l\7\c\k\t\t\z\d\n\f\7\k\w\c\1\y\l\2\g\4\3\u\y\p\d\a\a\s\c\8\i\o\o\w\8\7\f\v\y\f\e\a\2\k\d\s\o\a\5\5\v\2\v\0\g\u\a\m\5\q\h\t\x\d\3\r\h\6\a\a\1\o\w\h\z\8\k\a\k\r\t\g\5\k\l\a\2\t\2\n\5\7\o\6\j\p\4\n\c\5\a\e\z\j\u\c\a\5\i\i\2\o\s\y\l\1\o\t\q\a\y\3\z\r\1\9\l\6\4\y\o\5\o\t\z\b\1\j\l\0\8\5\w\f\h\x\q\s\7\u\r\v\p\6\4\i\t\u\s\q\0\1\i\t\o\c\q\d\u\4\7\x\j\5\l\7\p\x\3\0\z\0\q\v\v\3\2\e\h\7\l\3\h\q\9\9\z\d\4\4\9\2\t\f\v\m\l\3\c\l\e\a\7\7\6\n\t\0\t\y\y\j\8\z\b\2\z\2\k\0\8\0\x\7\x\5\q\w\z\l\0\a\a\p\x\t\n\i\f\v\v\e\2\v\d\0\z\t\k\c\i\l\n\q\u\f\e\a\o\0\k\j\0\4\o\d\2\e\b\b\4\q\u\2\s\j\b\z\s\z\a\5\a\c\p\m\i\1\h\g\i\8\4\3\v\n\6\k\9\7\x ]] 00:06:42.714 09:32:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.714 09:32:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:42.714 [2024-07-15 09:32:37.081760] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:42.715 [2024-07-15 09:32:37.081909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63656 ] 00:06:42.974 [2024-07-15 09:32:37.222604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.974 [2024-07-15 09:32:37.342802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.974 [2024-07-15 09:32:37.400625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.232  Copying: 512/512 [B] (average 166 kBps) 00:06:43.232 00:06:43.491 09:32:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ckvz0rubk7m6mbmayukqemmyemjhie8ghi3phyi4mtcz3mlcddccxqayylwaffb2coafu5gq6ebhcrywk00g0oib338452wmkslowwl3ydgdrgow00p7qqasljrfwdriuor4gof36hnizj7mjsxnskeo2j9zxozmgzejhfvxssub6ixqkbmorthn3qhq7ie2h8vn0pprol7ckttzdnf7kwc1yl2g43uypdaasc8ioow87fvyfea2kdsoa55v2v0guam5qhtxd3rh6aa1owhz8kakrtg5kla2t2n57o6jp4nc5aezjuca5ii2osyl1otqay3zr19l64yo5otzb1jl085wfhxqs7urvp64itusq01itocqdu47xj5l7px30z0qvv32eh7l3hq99zd4492tfvml3clea776nt0tyyj8zb2z2k080x7x5qwzl0aapxtnifvve2vd0ztkcilnqufeao0kj04od2ebb4qu2sjbzsza5acpmi1hgi843vn6k97x == \c\k\v\z\0\r\u\b\k\7\m\6\m\b\m\a\y\u\k\q\e\m\m\y\e\m\j\h\i\e\8\g\h\i\3\p\h\y\i\4\m\t\c\z\3\m\l\c\d\d\c\c\x\q\a\y\y\l\w\a\f\f\b\2\c\o\a\f\u\5\g\q\6\e\b\h\c\r\y\w\k\0\0\g\0\o\i\b\3\3\8\4\5\2\w\m\k\s\l\o\w\w\l\3\y\d\g\d\r\g\o\w\0\0\p\7\q\q\a\s\l\j\r\f\w\d\r\i\u\o\r\4\g\o\f\3\6\h\n\i\z\j\7\m\j\s\x\n\s\k\e\o\2\j\9\z\x\o\z\m\g\z\e\j\h\f\v\x\s\s\u\b\6\i\x\q\k\b\m\o\r\t\h\n\3\q\h\q\7\i\e\2\h\8\v\n\0\p\p\r\o\l\7\c\k\t\t\z\d\n\f\7\k\w\c\1\y\l\2\g\4\3\u\y\p\d\a\a\s\c\8\i\o\o\w\8\7\f\v\y\f\e\a\2\k\d\s\o\a\5\5\v\2\v\0\g\u\a\m\5\q\h\t\x\d\3\r\h\6\a\a\1\o\w\h\z\8\k\a\k\r\t\g\5\k\l\a\2\t\2\n\5\7\o\6\j\p\4\n\c\5\a\e\z\j\u\c\a\5\i\i\2\o\s\y\l\1\o\t\q\a\y\3\z\r\1\9\l\6\4\y\o\5\o\t\z\b\1\j\l\0\8\5\w\f\h\x\q\s\7\u\r\v\p\6\4\i\t\u\s\q\0\1\i\t\o\c\q\d\u\4\7\x\j\5\l\7\p\x\3\0\z\0\q\v\v\3\2\e\h\7\l\3\h\q\9\9\z\d\4\4\9\2\t\f\v\m\l\3\c\l\e\a\7\7\6\n\t\0\t\y\y\j\8\z\b\2\z\2\k\0\8\0\x\7\x\5\q\w\z\l\0\a\a\p\x\t\n\i\f\v\v\e\2\v\d\0\z\t\k\c\i\l\n\q\u\f\e\a\o\0\k\j\0\4\o\d\2\e\b\b\4\q\u\2\s\j\b\z\s\z\a\5\a\c\p\m\i\1\h\g\i\8\4\3\v\n\6\k\9\7\x ]] 00:06:43.491 09:32:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:43.491 09:32:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:43.491 [2024-07-15 09:32:37.741125] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:43.491 [2024-07-15 09:32:37.741246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63663 ] 00:06:43.491 [2024-07-15 09:32:37.872274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.749 [2024-07-15 09:32:37.981068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.749 [2024-07-15 09:32:38.038440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.007  Copying: 512/512 [B] (average 500 kBps) 00:06:44.007 00:06:44.007 ************************************ 00:06:44.007 END TEST dd_flags_misc_forced_aio 00:06:44.007 ************************************ 00:06:44.007 09:32:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ckvz0rubk7m6mbmayukqemmyemjhie8ghi3phyi4mtcz3mlcddccxqayylwaffb2coafu5gq6ebhcrywk00g0oib338452wmkslowwl3ydgdrgow00p7qqasljrfwdriuor4gof36hnizj7mjsxnskeo2j9zxozmgzejhfvxssub6ixqkbmorthn3qhq7ie2h8vn0pprol7ckttzdnf7kwc1yl2g43uypdaasc8ioow87fvyfea2kdsoa55v2v0guam5qhtxd3rh6aa1owhz8kakrtg5kla2t2n57o6jp4nc5aezjuca5ii2osyl1otqay3zr19l64yo5otzb1jl085wfhxqs7urvp64itusq01itocqdu47xj5l7px30z0qvv32eh7l3hq99zd4492tfvml3clea776nt0tyyj8zb2z2k080x7x5qwzl0aapxtnifvve2vd0ztkcilnqufeao0kj04od2ebb4qu2sjbzsza5acpmi1hgi843vn6k97x == \c\k\v\z\0\r\u\b\k\7\m\6\m\b\m\a\y\u\k\q\e\m\m\y\e\m\j\h\i\e\8\g\h\i\3\p\h\y\i\4\m\t\c\z\3\m\l\c\d\d\c\c\x\q\a\y\y\l\w\a\f\f\b\2\c\o\a\f\u\5\g\q\6\e\b\h\c\r\y\w\k\0\0\g\0\o\i\b\3\3\8\4\5\2\w\m\k\s\l\o\w\w\l\3\y\d\g\d\r\g\o\w\0\0\p\7\q\q\a\s\l\j\r\f\w\d\r\i\u\o\r\4\g\o\f\3\6\h\n\i\z\j\7\m\j\s\x\n\s\k\e\o\2\j\9\z\x\o\z\m\g\z\e\j\h\f\v\x\s\s\u\b\6\i\x\q\k\b\m\o\r\t\h\n\3\q\h\q\7\i\e\2\h\8\v\n\0\p\p\r\o\l\7\c\k\t\t\z\d\n\f\7\k\w\c\1\y\l\2\g\4\3\u\y\p\d\a\a\s\c\8\i\o\o\w\8\7\f\v\y\f\e\a\2\k\d\s\o\a\5\5\v\2\v\0\g\u\a\m\5\q\h\t\x\d\3\r\h\6\a\a\1\o\w\h\z\8\k\a\k\r\t\g\5\k\l\a\2\t\2\n\5\7\o\6\j\p\4\n\c\5\a\e\z\j\u\c\a\5\i\i\2\o\s\y\l\1\o\t\q\a\y\3\z\r\1\9\l\6\4\y\o\5\o\t\z\b\1\j\l\0\8\5\w\f\h\x\q\s\7\u\r\v\p\6\4\i\t\u\s\q\0\1\i\t\o\c\q\d\u\4\7\x\j\5\l\7\p\x\3\0\z\0\q\v\v\3\2\e\h\7\l\3\h\q\9\9\z\d\4\4\9\2\t\f\v\m\l\3\c\l\e\a\7\7\6\n\t\0\t\y\y\j\8\z\b\2\z\2\k\0\8\0\x\7\x\5\q\w\z\l\0\a\a\p\x\t\n\i\f\v\v\e\2\v\d\0\z\t\k\c\i\l\n\q\u\f\e\a\o\0\k\j\0\4\o\d\2\e\b\b\4\q\u\2\s\j\b\z\s\z\a\5\a\c\p\m\i\1\h\g\i\8\4\3\v\n\6\k\9\7\x ]] 00:06:44.007 00:06:44.007 real 0m5.292s 00:06:44.007 user 0m3.087s 00:06:44.007 sys 0m1.208s 00:06:44.007 09:32:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.007 09:32:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:44.007 09:32:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:44.007 09:32:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:44.007 09:32:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:44.007 09:32:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:44.007 ************************************ 00:06:44.007 END TEST spdk_dd_posix 00:06:44.007 ************************************ 00:06:44.007 00:06:44.007 real 0m23.366s 00:06:44.007 user 0m12.373s 
00:06:44.007 sys 0m6.878s 00:06:44.007 09:32:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.007 09:32:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:44.007 09:32:38 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:44.007 09:32:38 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:44.007 09:32:38 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.007 09:32:38 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.007 09:32:38 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:44.007 ************************************ 00:06:44.007 START TEST spdk_dd_malloc 00:06:44.007 ************************************ 00:06:44.007 09:32:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:44.266 * Looking for test storage... 00:06:44.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:44.266 ************************************ 00:06:44.266 START TEST dd_malloc_copy 00:06:44.266 ************************************ 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:44.266 09:32:38 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:44.266 [2024-07-15 09:32:38.566970] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:44.266 [2024-07-15 09:32:38.567325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63737 ] 00:06:44.266 { 00:06:44.266 "subsystems": [ 00:06:44.266 { 00:06:44.266 "subsystem": "bdev", 00:06:44.266 "config": [ 00:06:44.266 { 00:06:44.266 "params": { 00:06:44.266 "block_size": 512, 00:06:44.266 "num_blocks": 1048576, 00:06:44.266 "name": "malloc0" 00:06:44.266 }, 00:06:44.266 "method": "bdev_malloc_create" 00:06:44.266 }, 00:06:44.266 { 00:06:44.266 "params": { 00:06:44.266 "block_size": 512, 00:06:44.266 "num_blocks": 1048576, 00:06:44.266 "name": "malloc1" 00:06:44.266 }, 00:06:44.266 "method": "bdev_malloc_create" 00:06:44.266 }, 00:06:44.266 { 00:06:44.266 "method": "bdev_wait_for_examine" 00:06:44.266 } 00:06:44.266 ] 00:06:44.266 } 00:06:44.266 ] 00:06:44.266 } 00:06:44.266 [2024-07-15 09:32:38.707408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.524 [2024-07-15 09:32:38.831809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.524 [2024-07-15 09:32:38.891748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.331  Copying: 190/512 [MB] (190 MBps) Copying: 388/512 [MB] (198 MBps) Copying: 512/512 [MB] (average 195 MBps) 00:06:48.331 00:06:48.331 09:32:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:48.331 09:32:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:48.331 09:32:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:48.332 09:32:42 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:48.332 [2024-07-15 09:32:42.554386] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
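The JSON config echoed above is what drives the malloc-to-malloc copy. Fed by hand it would look roughly like the sketch below; /dev/stdin is used here instead of the harness's /dev/fd/62, which assumes spdk_dd accepts any readable path for --json.

# Sketch: hand-driving the same 512 MiB malloc-to-malloc copy
# (two bdevs of 1048576 blocks x 512-byte block size, as in the config above).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

"$SPDK_DD" --ib=malloc0 --ob=malloc1 --json /dev/stdin <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

The reverse direction (--ib=malloc1 --ob=malloc0) is what the second pass in this log exercises with the same config.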
00:06:48.332 [2024-07-15 09:32:42.554493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63789 ] 00:06:48.332 { 00:06:48.332 "subsystems": [ 00:06:48.332 { 00:06:48.332 "subsystem": "bdev", 00:06:48.332 "config": [ 00:06:48.332 { 00:06:48.332 "params": { 00:06:48.332 "block_size": 512, 00:06:48.332 "num_blocks": 1048576, 00:06:48.332 "name": "malloc0" 00:06:48.332 }, 00:06:48.332 "method": "bdev_malloc_create" 00:06:48.332 }, 00:06:48.332 { 00:06:48.332 "params": { 00:06:48.332 "block_size": 512, 00:06:48.332 "num_blocks": 1048576, 00:06:48.332 "name": "malloc1" 00:06:48.332 }, 00:06:48.332 "method": "bdev_malloc_create" 00:06:48.332 }, 00:06:48.332 { 00:06:48.332 "method": "bdev_wait_for_examine" 00:06:48.332 } 00:06:48.332 ] 00:06:48.332 } 00:06:48.332 ] 00:06:48.332 } 00:06:48.332 [2024-07-15 09:32:42.694994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.589 [2024-07-15 09:32:42.814813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.589 [2024-07-15 09:32:42.871625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.074  Copying: 195/512 [MB] (195 MBps) Copying: 392/512 [MB] (197 MBps) Copying: 512/512 [MB] (average 196 MBps) 00:06:52.074 00:06:52.074 ************************************ 00:06:52.074 END TEST dd_malloc_copy 00:06:52.074 ************************************ 00:06:52.074 00:06:52.074 real 0m7.937s 00:06:52.074 user 0m6.878s 00:06:52.074 sys 0m0.884s 00:06:52.074 09:32:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.074 09:32:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:52.074 09:32:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:06:52.074 00:06:52.074 real 0m8.068s 00:06:52.074 user 0m6.933s 00:06:52.074 sys 0m0.961s 00:06:52.074 ************************************ 00:06:52.074 END TEST spdk_dd_malloc 00:06:52.075 ************************************ 00:06:52.075 09:32:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.075 09:32:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:52.075 09:32:46 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:52.075 09:32:46 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:52.075 09:32:46 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:52.075 09:32:46 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.075 09:32:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:52.075 ************************************ 00:06:52.075 START TEST spdk_dd_bdev_to_bdev 00:06:52.075 ************************************ 00:06:52.075 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:52.333 * Looking for test storage... 
00:06:52.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:52.333 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:52.333 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.333 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.333 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.333 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.333 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.333 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.333 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:52.333 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.333 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:52.334 
09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.334 ************************************ 00:06:52.334 START TEST dd_inflate_file 00:06:52.334 ************************************ 00:06:52.334 09:32:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:52.334 [2024-07-15 09:32:46.675955] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:52.334 [2024-07-15 09:32:46.676280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63895 ] 00:06:52.640 [2024-07-15 09:32:46.809522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.640 [2024-07-15 09:32:46.914231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.640 [2024-07-15 09:32:46.970936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.900  Copying: 64/64 [MB] (average 1729 MBps) 00:06:52.900 00:06:52.900 ************************************ 00:06:52.900 END TEST dd_inflate_file 00:06:52.900 ************************************ 00:06:52.900 00:06:52.900 real 0m0.628s 00:06:52.900 user 0m0.374s 00:06:52.900 sys 0m0.302s 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:52.900 ************************************ 00:06:52.900 START TEST dd_copy_to_out_bdev 00:06:52.900 ************************************ 00:06:52.900 09:32:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:52.900 { 00:06:52.900 "subsystems": [ 00:06:52.900 { 00:06:52.900 "subsystem": "bdev", 00:06:52.900 "config": [ 00:06:52.900 { 00:06:52.900 "params": { 00:06:52.900 "trtype": "pcie", 00:06:52.900 "traddr": "0000:00:10.0", 00:06:52.900 "name": "Nvme0" 00:06:52.900 }, 00:06:52.900 "method": "bdev_nvme_attach_controller" 00:06:52.900 }, 00:06:52.900 { 00:06:52.900 "params": { 00:06:52.900 "trtype": "pcie", 00:06:52.900 "traddr": "0000:00:11.0", 00:06:52.900 "name": "Nvme1" 00:06:52.900 }, 00:06:52.900 "method": "bdev_nvme_attach_controller" 00:06:52.900 }, 00:06:52.900 { 00:06:52.900 "method": "bdev_wait_for_examine" 00:06:52.900 } 00:06:52.900 ] 00:06:52.900 } 00:06:52.900 ] 00:06:52.900 } 00:06:52.900 [2024-07-15 09:32:47.357240] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
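A quick check on the test_file0_size=67108891 recorded above: the redirection is not visible in the xtrace output, but the arithmetic only works out if dd.dump0 already held the 26-character magic line plus its newline before dd_inflate_file appended 64 MiB of zeros with --oflag=append:

# 64 MiB appended on top of the 27-byte magic line already in dd.dump0
echo $(( 64 * 1048576 + 27 ))   # 67108891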
00:06:52.900 [2024-07-15 09:32:47.357360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63933 ] 00:06:53.157 [2024-07-15 09:32:47.495877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.158 [2024-07-15 09:32:47.597279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.415 [2024-07-15 09:32:47.653694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.046  Copying: 55/64 [MB] (55 MBps) Copying: 64/64 [MB] (average 54 MBps) 00:06:55.046 00:06:55.046 00:06:55.046 real 0m1.975s 00:06:55.046 user 0m1.732s 00:06:55.046 sys 0m1.539s 00:06:55.046 ************************************ 00:06:55.046 END TEST dd_copy_to_out_bdev 00:06:55.046 ************************************ 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:55.046 ************************************ 00:06:55.046 START TEST dd_offset_magic 00:06:55.046 ************************************ 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:55.046 09:32:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:55.046 [2024-07-15 09:32:49.381192] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:55.046 [2024-07-15 09:32:49.381312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63979 ] 00:06:55.046 { 00:06:55.046 "subsystems": [ 00:06:55.046 { 00:06:55.046 "subsystem": "bdev", 00:06:55.046 "config": [ 00:06:55.046 { 00:06:55.046 "params": { 00:06:55.046 "trtype": "pcie", 00:06:55.046 "traddr": "0000:00:10.0", 00:06:55.046 "name": "Nvme0" 00:06:55.046 }, 00:06:55.046 "method": "bdev_nvme_attach_controller" 00:06:55.046 }, 00:06:55.046 { 00:06:55.046 "params": { 00:06:55.046 "trtype": "pcie", 00:06:55.046 "traddr": "0000:00:11.0", 00:06:55.046 "name": "Nvme1" 00:06:55.046 }, 00:06:55.046 "method": "bdev_nvme_attach_controller" 00:06:55.046 }, 00:06:55.046 { 00:06:55.046 "method": "bdev_wait_for_examine" 00:06:55.046 } 00:06:55.046 ] 00:06:55.046 } 00:06:55.046 ] 00:06:55.046 } 00:06:55.304 [2024-07-15 09:32:49.519429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.304 [2024-07-15 09:32:49.620795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.304 [2024-07-15 09:32:49.676248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.819  Copying: 65/65 [MB] (average 928 MBps) 00:06:55.819 00:06:55.819 09:32:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:55.819 09:32:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:55.819 09:32:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:55.819 09:32:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:55.819 [2024-07-15 09:32:50.236607] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:55.819 [2024-07-15 09:32:50.236704] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63996 ] 00:06:55.819 { 00:06:55.819 "subsystems": [ 00:06:55.819 { 00:06:55.819 "subsystem": "bdev", 00:06:55.819 "config": [ 00:06:55.819 { 00:06:55.819 "params": { 00:06:55.820 "trtype": "pcie", 00:06:55.820 "traddr": "0000:00:10.0", 00:06:55.820 "name": "Nvme0" 00:06:55.820 }, 00:06:55.820 "method": "bdev_nvme_attach_controller" 00:06:55.820 }, 00:06:55.820 { 00:06:55.820 "params": { 00:06:55.820 "trtype": "pcie", 00:06:55.820 "traddr": "0000:00:11.0", 00:06:55.820 "name": "Nvme1" 00:06:55.820 }, 00:06:55.820 "method": "bdev_nvme_attach_controller" 00:06:55.820 }, 00:06:55.820 { 00:06:55.820 "method": "bdev_wait_for_examine" 00:06:55.820 } 00:06:55.820 ] 00:06:55.820 } 00:06:55.820 ] 00:06:55.820 } 00:06:56.077 [2024-07-15 09:32:50.375058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.078 [2024-07-15 09:32:50.482918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.078 [2024-07-15 09:32:50.538458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.595  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:56.595 00:06:56.595 09:32:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:56.595 09:32:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:56.595 09:32:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:56.595 09:32:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:56.595 09:32:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:56.595 09:32:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:56.595 09:32:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:56.595 [2024-07-15 09:32:50.995827] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
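The offset-16 pass that just completed is the whole point of dd_offset_magic: write a block of data some way into the destination bdev, read the same offset back, and confirm the magic string survived. A sketch under the assumptions visible in the trace (two local NVMe controllers at 0000:00:10.0 and 0000:00:11.0; the config file path is illustrative, the test pipes gen_conf output instead):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=/tmp/nvme_pair.json

cat > "$conf" <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0" } },
  { "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme1", "trtype": "pcie", "traddr": "0000:00:11.0" } },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON

# 65 MiB written 16 MiB into Nvme1n1, sourced from Nvme0n1 (whose first bytes are the magic line) ...
"$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json "$conf"
# ... then 1 MiB read back from the same offset; its first 26 bytes must still be the magic.
"$SPDK_DD" --ib=Nvme1n1 --of=/tmp/dd.dump1 --count=1 --skip=16 --bs=1048576 --json "$conf"
read -rn26 magic_check < /tmp/dd.dump1
[[ $magic_check == "This Is Our Magic, find it" ]] && echo "offset 16 ok"

The same pair of commands is then repeated with seek/skip 64, which is the run starting in the trace below.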
00:06:56.595 [2024-07-15 09:32:50.995954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64013 ] 00:06:56.595 { 00:06:56.595 "subsystems": [ 00:06:56.595 { 00:06:56.595 "subsystem": "bdev", 00:06:56.595 "config": [ 00:06:56.595 { 00:06:56.595 "params": { 00:06:56.595 "trtype": "pcie", 00:06:56.595 "traddr": "0000:00:10.0", 00:06:56.595 "name": "Nvme0" 00:06:56.595 }, 00:06:56.595 "method": "bdev_nvme_attach_controller" 00:06:56.595 }, 00:06:56.595 { 00:06:56.595 "params": { 00:06:56.595 "trtype": "pcie", 00:06:56.595 "traddr": "0000:00:11.0", 00:06:56.595 "name": "Nvme1" 00:06:56.595 }, 00:06:56.595 "method": "bdev_nvme_attach_controller" 00:06:56.595 }, 00:06:56.595 { 00:06:56.595 "method": "bdev_wait_for_examine" 00:06:56.595 } 00:06:56.595 ] 00:06:56.595 } 00:06:56.595 ] 00:06:56.595 } 00:06:56.856 [2024-07-15 09:32:51.137063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.856 [2024-07-15 09:32:51.240099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.856 [2024-07-15 09:32:51.295478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.370  Copying: 65/65 [MB] (average 1015 MBps) 00:06:57.370 00:06:57.370 09:32:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:57.370 09:32:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:57.370 09:32:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:57.370 09:32:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:57.627 [2024-07-15 09:32:51.847630] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:57.627 [2024-07-15 09:32:51.847719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64033 ] 00:06:57.627 { 00:06:57.627 "subsystems": [ 00:06:57.627 { 00:06:57.627 "subsystem": "bdev", 00:06:57.627 "config": [ 00:06:57.627 { 00:06:57.627 "params": { 00:06:57.627 "trtype": "pcie", 00:06:57.627 "traddr": "0000:00:10.0", 00:06:57.627 "name": "Nvme0" 00:06:57.627 }, 00:06:57.627 "method": "bdev_nvme_attach_controller" 00:06:57.627 }, 00:06:57.627 { 00:06:57.627 "params": { 00:06:57.627 "trtype": "pcie", 00:06:57.627 "traddr": "0000:00:11.0", 00:06:57.627 "name": "Nvme1" 00:06:57.627 }, 00:06:57.627 "method": "bdev_nvme_attach_controller" 00:06:57.627 }, 00:06:57.627 { 00:06:57.627 "method": "bdev_wait_for_examine" 00:06:57.627 } 00:06:57.627 ] 00:06:57.627 } 00:06:57.627 ] 00:06:57.627 } 00:06:57.627 [2024-07-15 09:32:51.979975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.627 [2024-07-15 09:32:52.088841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.885 [2024-07-15 09:32:52.143762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.143  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:58.143 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:58.143 00:06:58.143 real 0m3.225s 00:06:58.143 user 0m2.364s 00:06:58.143 sys 0m0.927s 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.143 ************************************ 00:06:58.143 END TEST dd_offset_magic 00:06:58.143 ************************************ 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:58.143 09:32:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:58.401 [2024-07-15 09:32:52.650914] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
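The clear_nvme call in the cleanup above zeroes the region the test touched using whole 1 MiB blocks, so the requested size of 4194330 bytes (4 MiB plus 26 bytes) becomes count=5. The rounding, written out (the exact expression used inside dd/common.sh may differ):

# smallest number of 1 MiB blocks covering 4194330 bytes
echo $(( (4194330 + 1048576 - 1) / 1048576 ))   # 5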
00:06:58.401 [2024-07-15 09:32:52.651006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64070 ] 00:06:58.401 { 00:06:58.401 "subsystems": [ 00:06:58.401 { 00:06:58.401 "subsystem": "bdev", 00:06:58.401 "config": [ 00:06:58.401 { 00:06:58.401 "params": { 00:06:58.401 "trtype": "pcie", 00:06:58.401 "traddr": "0000:00:10.0", 00:06:58.401 "name": "Nvme0" 00:06:58.401 }, 00:06:58.401 "method": "bdev_nvme_attach_controller" 00:06:58.401 }, 00:06:58.401 { 00:06:58.401 "params": { 00:06:58.401 "trtype": "pcie", 00:06:58.401 "traddr": "0000:00:11.0", 00:06:58.401 "name": "Nvme1" 00:06:58.401 }, 00:06:58.401 "method": "bdev_nvme_attach_controller" 00:06:58.401 }, 00:06:58.401 { 00:06:58.401 "method": "bdev_wait_for_examine" 00:06:58.401 } 00:06:58.401 ] 00:06:58.401 } 00:06:58.401 ] 00:06:58.401 } 00:06:58.401 [2024-07-15 09:32:52.789226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.659 [2024-07-15 09:32:52.903307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.659 [2024-07-15 09:32:52.957923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.923  Copying: 5120/5120 [kB] (average 1250 MBps) 00:06:58.923 00:06:58.923 09:32:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:58.923 09:32:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:58.923 09:32:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:58.923 09:32:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:58.923 09:32:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:58.923 09:32:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:58.923 09:32:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:58.923 09:32:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:58.923 09:32:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:58.923 09:32:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:59.180 { 00:06:59.180 "subsystems": [ 00:06:59.180 { 00:06:59.180 "subsystem": "bdev", 00:06:59.180 "config": [ 00:06:59.180 { 00:06:59.180 "params": { 00:06:59.180 "trtype": "pcie", 00:06:59.180 "traddr": "0000:00:10.0", 00:06:59.180 "name": "Nvme0" 00:06:59.180 }, 00:06:59.180 "method": "bdev_nvme_attach_controller" 00:06:59.180 }, 00:06:59.180 { 00:06:59.180 "params": { 00:06:59.181 "trtype": "pcie", 00:06:59.181 "traddr": "0000:00:11.0", 00:06:59.181 "name": "Nvme1" 00:06:59.181 }, 00:06:59.181 "method": "bdev_nvme_attach_controller" 00:06:59.181 }, 00:06:59.181 { 00:06:59.181 "method": "bdev_wait_for_examine" 00:06:59.181 } 00:06:59.181 ] 00:06:59.181 } 00:06:59.181 ] 00:06:59.181 } 00:06:59.181 [2024-07-15 09:32:53.429048] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:59.181 [2024-07-15 09:32:53.429167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64091 ] 00:06:59.181 [2024-07-15 09:32:53.575156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.438 [2024-07-15 09:32:53.691734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.438 [2024-07-15 09:32:53.747029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.696  Copying: 5120/5120 [kB] (average 833 MBps) 00:06:59.696 00:06:59.696 09:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:59.955 ************************************ 00:06:59.955 END TEST spdk_dd_bdev_to_bdev 00:06:59.955 ************************************ 00:06:59.955 00:06:59.955 real 0m7.641s 00:06:59.955 user 0m5.663s 00:06:59.955 sys 0m3.480s 00:06:59.955 09:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.955 09:32:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:59.955 09:32:54 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:59.955 09:32:54 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:59.955 09:32:54 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:59.955 09:32:54 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.955 09:32:54 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.955 09:32:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:59.955 ************************************ 00:06:59.955 START TEST spdk_dd_uring 00:06:59.955 ************************************ 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:59.955 * Looking for test storage... 
00:06:59.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.955 09:32:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:59.955 ************************************ 00:06:59.955 START TEST dd_uring_copy 00:06:59.955 ************************************ 00:06:59.956 
09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=ovuj1slv0lmlg9rh5tunm2brv6im1488q9ffbr1owo6e7aogm8d4yq1nj61t9y2txbh9lvhvmr71a2s35a20ce3lozpy6ed75a4a1r3h9z9k60rx1mmvycsawdkjg8br4h3xsl9acgcjcadu11nszp6ys5royp767fv9z4lu415xlwm6by5f1tt13ywb8941uqdyx7zm95andm19kpuf6z4r8hq9qhpmegfcfrk9wovg4sl859oojrf5mniube3fudevjatvf603okhsyg5besxy3l17g6ix0bbi6zc9ripnm7quivddj1vqv1jyuksnjhe0hkn739s7wyj8ll15qqs11smxgfd9vwbcwdfk6dnop9xq3q64rhk4fzlce9nn5qg6kwjibh65e3yfzbw4fj1fapz6958kdwyy9en3m0n7mp6mvhat8txa99exlro8vuut4hppm6b2m2o3kejkpebf1357b8u2wzurg3zlq645ndl8rqibqra3o6w5u9q2q9zp8qdma25ja54rb3t1b8b4znfifm1ljseo719x2l3xq2jutb6ogp0f0taz92rpfkg6vhkl0hoj8sy4j99bj5gcjx6of0qtjweq34484rweebicss34arnh0ufprhpq003c9t7ktgpo1dsrd5py7n4aepywvh0g255febf5pu4tveur4upgtd3w0h1plureo0tkn7120rdnv4yahkyb6ohhboqn32to2898x7xiqg9u98rhzm6nq7d0xa25fdnmgqmwy02y9cagcgp63z10l4y3oacuimweqi1o7mgul71ikvnboxwnknb7a2804n4ooffckv3h3yn6wmic9oxuorpsw2xbm9ihgr4a0ht8i1i1klqgzci8a90pjhbdbb41x34phitj6fhk86ajxdxvlhxda82z8jh910n9eeucw5ffsnav8l385a6hzt0csyypfqdzzikznwxf1jif8voywhyyxxb7e5ugyr7lk3r3lw69vc4aaiynmtyjkzueajv1 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo ovuj1slv0lmlg9rh5tunm2brv6im1488q9ffbr1owo6e7aogm8d4yq1nj61t9y2txbh9lvhvmr71a2s35a20ce3lozpy6ed75a4a1r3h9z9k60rx1mmvycsawdkjg8br4h3xsl9acgcjcadu11nszp6ys5royp767fv9z4lu415xlwm6by5f1tt13ywb8941uqdyx7zm95andm19kpuf6z4r8hq9qhpmegfcfrk9wovg4sl859oojrf5mniube3fudevjatvf603okhsyg5besxy3l17g6ix0bbi6zc9ripnm7quivddj1vqv1jyuksnjhe0hkn739s7wyj8ll15qqs11smxgfd9vwbcwdfk6dnop9xq3q64rhk4fzlce9nn5qg6kwjibh65e3yfzbw4fj1fapz6958kdwyy9en3m0n7mp6mvhat8txa99exlro8vuut4hppm6b2m2o3kejkpebf1357b8u2wzurg3zlq645ndl8rqibqra3o6w5u9q2q9zp8qdma25ja54rb3t1b8b4znfifm1ljseo719x2l3xq2jutb6ogp0f0taz92rpfkg6vhkl0hoj8sy4j99bj5gcjx6of0qtjweq34484rweebicss34arnh0ufprhpq003c9t7ktgpo1dsrd5py7n4aepywvh0g255febf5pu4tveur4upgtd3w0h1plureo0tkn7120rdnv4yahkyb6ohhboqn32to2898x7xiqg9u98rhzm6nq7d0xa25fdnmgqmwy02y9cagcgp63z10l4y3oacuimweqi1o7mgul71ikvnboxwnknb7a2804n4ooffckv3h3yn6wmic9oxuorpsw2xbm9ihgr4a0ht8i1i1klqgzci8a90pjhbdbb41x34phitj6fhk86ajxdxvlhxda82z8jh910n9eeucw5ffsnav8l385a6hzt0csyypfqdzzikznwxf1jif8voywhyyxxb7e5ugyr7lk3r3lw69vc4aaiynmtyjkzueajv1 00:06:59.956 09:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:59.956 [2024-07-15 09:32:54.385334] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
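The odd-looking --bs=536869887 in the fill command just traced is not arbitrary: the 1024-character magic generated by gen_bytes plus the newline from echo already occupy 1025 bytes of magic.dump0, so the single appended block of zeros brings the file to exactly 512 MiB, presumably to match the 512M zram device it is about to be copied into:

# 512 MiB minus the 1025-byte magic line already in magic.dump0
echo $(( 512 * 1048576 - 1025 ))   # 536869887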
00:06:59.956 [2024-07-15 09:32:54.385849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64161 ] 00:07:00.213 [2024-07-15 09:32:54.522352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.214 [2024-07-15 09:32:54.646288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.472 [2024-07-15 09:32:54.704733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.603  Copying: 511/511 [MB] (average 1036 MBps) 00:07:01.603 00:07:01.603 09:32:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:01.603 09:32:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:01.603 09:32:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:01.603 09:32:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.603 [2024-07-15 09:32:55.926764] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:01.603 [2024-07-15 09:32:55.927452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64177 ] 00:07:01.603 { 00:07:01.603 "subsystems": [ 00:07:01.603 { 00:07:01.603 "subsystem": "bdev", 00:07:01.603 "config": [ 00:07:01.603 { 00:07:01.603 "params": { 00:07:01.603 "block_size": 512, 00:07:01.603 "num_blocks": 1048576, 00:07:01.603 "name": "malloc0" 00:07:01.603 }, 00:07:01.603 "method": "bdev_malloc_create" 00:07:01.603 }, 00:07:01.603 { 00:07:01.603 "params": { 00:07:01.603 "filename": "/dev/zram1", 00:07:01.603 "name": "uring0" 00:07:01.603 }, 00:07:01.603 "method": "bdev_uring_create" 00:07:01.603 }, 00:07:01.603 { 00:07:01.603 "method": "bdev_wait_for_examine" 00:07:01.603 } 00:07:01.603 ] 00:07:01.603 } 00:07:01.603 ] 00:07:01.603 } 00:07:01.603 [2024-07-15 09:32:56.067039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.861 [2024-07-15 09:32:56.173718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.861 [2024-07-15 09:32:56.230852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.725  Copying: 224/512 [MB] (224 MBps) Copying: 450/512 [MB] (225 MBps) Copying: 512/512 [MB] (average 224 MBps) 00:07:04.725 00:07:04.725 09:32:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:04.725 09:32:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:04.725 09:32:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:04.725 09:32:59 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:04.725 [2024-07-15 09:32:59.180406] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:04.725 [2024-07-15 09:32:59.180492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64221 ] 00:07:04.984 { 00:07:04.984 "subsystems": [ 00:07:04.984 { 00:07:04.984 "subsystem": "bdev", 00:07:04.984 "config": [ 00:07:04.984 { 00:07:04.984 "params": { 00:07:04.984 "block_size": 512, 00:07:04.984 "num_blocks": 1048576, 00:07:04.984 "name": "malloc0" 00:07:04.984 }, 00:07:04.984 "method": "bdev_malloc_create" 00:07:04.984 }, 00:07:04.984 { 00:07:04.984 "params": { 00:07:04.984 "filename": "/dev/zram1", 00:07:04.984 "name": "uring0" 00:07:04.984 }, 00:07:04.984 "method": "bdev_uring_create" 00:07:04.984 }, 00:07:04.984 { 00:07:04.984 "method": "bdev_wait_for_examine" 00:07:04.984 } 00:07:04.984 ] 00:07:04.984 } 00:07:04.984 ] 00:07:04.984 } 00:07:04.984 [2024-07-15 09:32:59.314556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.984 [2024-07-15 09:32:59.427677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.242 [2024-07-15 09:32:59.483780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.069  Copying: 169/512 [MB] (169 MBps) Copying: 328/512 [MB] (159 MBps) Copying: 490/512 [MB] (161 MBps) Copying: 512/512 [MB] (average 163 MBps) 00:07:09.069 00:07:09.069 09:33:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:09.070 09:33:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ ovuj1slv0lmlg9rh5tunm2brv6im1488q9ffbr1owo6e7aogm8d4yq1nj61t9y2txbh9lvhvmr71a2s35a20ce3lozpy6ed75a4a1r3h9z9k60rx1mmvycsawdkjg8br4h3xsl9acgcjcadu11nszp6ys5royp767fv9z4lu415xlwm6by5f1tt13ywb8941uqdyx7zm95andm19kpuf6z4r8hq9qhpmegfcfrk9wovg4sl859oojrf5mniube3fudevjatvf603okhsyg5besxy3l17g6ix0bbi6zc9ripnm7quivddj1vqv1jyuksnjhe0hkn739s7wyj8ll15qqs11smxgfd9vwbcwdfk6dnop9xq3q64rhk4fzlce9nn5qg6kwjibh65e3yfzbw4fj1fapz6958kdwyy9en3m0n7mp6mvhat8txa99exlro8vuut4hppm6b2m2o3kejkpebf1357b8u2wzurg3zlq645ndl8rqibqra3o6w5u9q2q9zp8qdma25ja54rb3t1b8b4znfifm1ljseo719x2l3xq2jutb6ogp0f0taz92rpfkg6vhkl0hoj8sy4j99bj5gcjx6of0qtjweq34484rweebicss34arnh0ufprhpq003c9t7ktgpo1dsrd5py7n4aepywvh0g255febf5pu4tveur4upgtd3w0h1plureo0tkn7120rdnv4yahkyb6ohhboqn32to2898x7xiqg9u98rhzm6nq7d0xa25fdnmgqmwy02y9cagcgp63z10l4y3oacuimweqi1o7mgul71ikvnboxwnknb7a2804n4ooffckv3h3yn6wmic9oxuorpsw2xbm9ihgr4a0ht8i1i1klqgzci8a90pjhbdbb41x34phitj6fhk86ajxdxvlhxda82z8jh910n9eeucw5ffsnav8l385a6hzt0csyypfqdzzikznwxf1jif8voywhyyxxb7e5ugyr7lk3r3lw69vc4aaiynmtyjkzueajv1 == 
\o\v\u\j\1\s\l\v\0\l\m\l\g\9\r\h\5\t\u\n\m\2\b\r\v\6\i\m\1\4\8\8\q\9\f\f\b\r\1\o\w\o\6\e\7\a\o\g\m\8\d\4\y\q\1\n\j\6\1\t\9\y\2\t\x\b\h\9\l\v\h\v\m\r\7\1\a\2\s\3\5\a\2\0\c\e\3\l\o\z\p\y\6\e\d\7\5\a\4\a\1\r\3\h\9\z\9\k\6\0\r\x\1\m\m\v\y\c\s\a\w\d\k\j\g\8\b\r\4\h\3\x\s\l\9\a\c\g\c\j\c\a\d\u\1\1\n\s\z\p\6\y\s\5\r\o\y\p\7\6\7\f\v\9\z\4\l\u\4\1\5\x\l\w\m\6\b\y\5\f\1\t\t\1\3\y\w\b\8\9\4\1\u\q\d\y\x\7\z\m\9\5\a\n\d\m\1\9\k\p\u\f\6\z\4\r\8\h\q\9\q\h\p\m\e\g\f\c\f\r\k\9\w\o\v\g\4\s\l\8\5\9\o\o\j\r\f\5\m\n\i\u\b\e\3\f\u\d\e\v\j\a\t\v\f\6\0\3\o\k\h\s\y\g\5\b\e\s\x\y\3\l\1\7\g\6\i\x\0\b\b\i\6\z\c\9\r\i\p\n\m\7\q\u\i\v\d\d\j\1\v\q\v\1\j\y\u\k\s\n\j\h\e\0\h\k\n\7\3\9\s\7\w\y\j\8\l\l\1\5\q\q\s\1\1\s\m\x\g\f\d\9\v\w\b\c\w\d\f\k\6\d\n\o\p\9\x\q\3\q\6\4\r\h\k\4\f\z\l\c\e\9\n\n\5\q\g\6\k\w\j\i\b\h\6\5\e\3\y\f\z\b\w\4\f\j\1\f\a\p\z\6\9\5\8\k\d\w\y\y\9\e\n\3\m\0\n\7\m\p\6\m\v\h\a\t\8\t\x\a\9\9\e\x\l\r\o\8\v\u\u\t\4\h\p\p\m\6\b\2\m\2\o\3\k\e\j\k\p\e\b\f\1\3\5\7\b\8\u\2\w\z\u\r\g\3\z\l\q\6\4\5\n\d\l\8\r\q\i\b\q\r\a\3\o\6\w\5\u\9\q\2\q\9\z\p\8\q\d\m\a\2\5\j\a\5\4\r\b\3\t\1\b\8\b\4\z\n\f\i\f\m\1\l\j\s\e\o\7\1\9\x\2\l\3\x\q\2\j\u\t\b\6\o\g\p\0\f\0\t\a\z\9\2\r\p\f\k\g\6\v\h\k\l\0\h\o\j\8\s\y\4\j\9\9\b\j\5\g\c\j\x\6\o\f\0\q\t\j\w\e\q\3\4\4\8\4\r\w\e\e\b\i\c\s\s\3\4\a\r\n\h\0\u\f\p\r\h\p\q\0\0\3\c\9\t\7\k\t\g\p\o\1\d\s\r\d\5\p\y\7\n\4\a\e\p\y\w\v\h\0\g\2\5\5\f\e\b\f\5\p\u\4\t\v\e\u\r\4\u\p\g\t\d\3\w\0\h\1\p\l\u\r\e\o\0\t\k\n\7\1\2\0\r\d\n\v\4\y\a\h\k\y\b\6\o\h\h\b\o\q\n\3\2\t\o\2\8\9\8\x\7\x\i\q\g\9\u\9\8\r\h\z\m\6\n\q\7\d\0\x\a\2\5\f\d\n\m\g\q\m\w\y\0\2\y\9\c\a\g\c\g\p\6\3\z\1\0\l\4\y\3\o\a\c\u\i\m\w\e\q\i\1\o\7\m\g\u\l\7\1\i\k\v\n\b\o\x\w\n\k\n\b\7\a\2\8\0\4\n\4\o\o\f\f\c\k\v\3\h\3\y\n\6\w\m\i\c\9\o\x\u\o\r\p\s\w\2\x\b\m\9\i\h\g\r\4\a\0\h\t\8\i\1\i\1\k\l\q\g\z\c\i\8\a\9\0\p\j\h\b\d\b\b\4\1\x\3\4\p\h\i\t\j\6\f\h\k\8\6\a\j\x\d\x\v\l\h\x\d\a\8\2\z\8\j\h\9\1\0\n\9\e\e\u\c\w\5\f\f\s\n\a\v\8\l\3\8\5\a\6\h\z\t\0\c\s\y\y\p\f\q\d\z\z\i\k\z\n\w\x\f\1\j\i\f\8\v\o\y\w\h\y\y\x\x\b\7\e\5\u\g\y\r\7\l\k\3\r\3\l\w\6\9\v\c\4\a\a\i\y\n\m\t\y\j\k\z\u\e\a\j\v\1 ]] 00:07:09.070 09:33:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:09.070 09:33:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ ovuj1slv0lmlg9rh5tunm2brv6im1488q9ffbr1owo6e7aogm8d4yq1nj61t9y2txbh9lvhvmr71a2s35a20ce3lozpy6ed75a4a1r3h9z9k60rx1mmvycsawdkjg8br4h3xsl9acgcjcadu11nszp6ys5royp767fv9z4lu415xlwm6by5f1tt13ywb8941uqdyx7zm95andm19kpuf6z4r8hq9qhpmegfcfrk9wovg4sl859oojrf5mniube3fudevjatvf603okhsyg5besxy3l17g6ix0bbi6zc9ripnm7quivddj1vqv1jyuksnjhe0hkn739s7wyj8ll15qqs11smxgfd9vwbcwdfk6dnop9xq3q64rhk4fzlce9nn5qg6kwjibh65e3yfzbw4fj1fapz6958kdwyy9en3m0n7mp6mvhat8txa99exlro8vuut4hppm6b2m2o3kejkpebf1357b8u2wzurg3zlq645ndl8rqibqra3o6w5u9q2q9zp8qdma25ja54rb3t1b8b4znfifm1ljseo719x2l3xq2jutb6ogp0f0taz92rpfkg6vhkl0hoj8sy4j99bj5gcjx6of0qtjweq34484rweebicss34arnh0ufprhpq003c9t7ktgpo1dsrd5py7n4aepywvh0g255febf5pu4tveur4upgtd3w0h1plureo0tkn7120rdnv4yahkyb6ohhboqn32to2898x7xiqg9u98rhzm6nq7d0xa25fdnmgqmwy02y9cagcgp63z10l4y3oacuimweqi1o7mgul71ikvnboxwnknb7a2804n4ooffckv3h3yn6wmic9oxuorpsw2xbm9ihgr4a0ht8i1i1klqgzci8a90pjhbdbb41x34phitj6fhk86ajxdxvlhxda82z8jh910n9eeucw5ffsnav8l385a6hzt0csyypfqdzzikznwxf1jif8voywhyyxxb7e5ugyr7lk3r3lw69vc4aaiynmtyjkzueajv1 == 
\o\v\u\j\1\s\l\v\0\l\m\l\g\9\r\h\5\t\u\n\m\2\b\r\v\6\i\m\1\4\8\8\q\9\f\f\b\r\1\o\w\o\6\e\7\a\o\g\m\8\d\4\y\q\1\n\j\6\1\t\9\y\2\t\x\b\h\9\l\v\h\v\m\r\7\1\a\2\s\3\5\a\2\0\c\e\3\l\o\z\p\y\6\e\d\7\5\a\4\a\1\r\3\h\9\z\9\k\6\0\r\x\1\m\m\v\y\c\s\a\w\d\k\j\g\8\b\r\4\h\3\x\s\l\9\a\c\g\c\j\c\a\d\u\1\1\n\s\z\p\6\y\s\5\r\o\y\p\7\6\7\f\v\9\z\4\l\u\4\1\5\x\l\w\m\6\b\y\5\f\1\t\t\1\3\y\w\b\8\9\4\1\u\q\d\y\x\7\z\m\9\5\a\n\d\m\1\9\k\p\u\f\6\z\4\r\8\h\q\9\q\h\p\m\e\g\f\c\f\r\k\9\w\o\v\g\4\s\l\8\5\9\o\o\j\r\f\5\m\n\i\u\b\e\3\f\u\d\e\v\j\a\t\v\f\6\0\3\o\k\h\s\y\g\5\b\e\s\x\y\3\l\1\7\g\6\i\x\0\b\b\i\6\z\c\9\r\i\p\n\m\7\q\u\i\v\d\d\j\1\v\q\v\1\j\y\u\k\s\n\j\h\e\0\h\k\n\7\3\9\s\7\w\y\j\8\l\l\1\5\q\q\s\1\1\s\m\x\g\f\d\9\v\w\b\c\w\d\f\k\6\d\n\o\p\9\x\q\3\q\6\4\r\h\k\4\f\z\l\c\e\9\n\n\5\q\g\6\k\w\j\i\b\h\6\5\e\3\y\f\z\b\w\4\f\j\1\f\a\p\z\6\9\5\8\k\d\w\y\y\9\e\n\3\m\0\n\7\m\p\6\m\v\h\a\t\8\t\x\a\9\9\e\x\l\r\o\8\v\u\u\t\4\h\p\p\m\6\b\2\m\2\o\3\k\e\j\k\p\e\b\f\1\3\5\7\b\8\u\2\w\z\u\r\g\3\z\l\q\6\4\5\n\d\l\8\r\q\i\b\q\r\a\3\o\6\w\5\u\9\q\2\q\9\z\p\8\q\d\m\a\2\5\j\a\5\4\r\b\3\t\1\b\8\b\4\z\n\f\i\f\m\1\l\j\s\e\o\7\1\9\x\2\l\3\x\q\2\j\u\t\b\6\o\g\p\0\f\0\t\a\z\9\2\r\p\f\k\g\6\v\h\k\l\0\h\o\j\8\s\y\4\j\9\9\b\j\5\g\c\j\x\6\o\f\0\q\t\j\w\e\q\3\4\4\8\4\r\w\e\e\b\i\c\s\s\3\4\a\r\n\h\0\u\f\p\r\h\p\q\0\0\3\c\9\t\7\k\t\g\p\o\1\d\s\r\d\5\p\y\7\n\4\a\e\p\y\w\v\h\0\g\2\5\5\f\e\b\f\5\p\u\4\t\v\e\u\r\4\u\p\g\t\d\3\w\0\h\1\p\l\u\r\e\o\0\t\k\n\7\1\2\0\r\d\n\v\4\y\a\h\k\y\b\6\o\h\h\b\o\q\n\3\2\t\o\2\8\9\8\x\7\x\i\q\g\9\u\9\8\r\h\z\m\6\n\q\7\d\0\x\a\2\5\f\d\n\m\g\q\m\w\y\0\2\y\9\c\a\g\c\g\p\6\3\z\1\0\l\4\y\3\o\a\c\u\i\m\w\e\q\i\1\o\7\m\g\u\l\7\1\i\k\v\n\b\o\x\w\n\k\n\b\7\a\2\8\0\4\n\4\o\o\f\f\c\k\v\3\h\3\y\n\6\w\m\i\c\9\o\x\u\o\r\p\s\w\2\x\b\m\9\i\h\g\r\4\a\0\h\t\8\i\1\i\1\k\l\q\g\z\c\i\8\a\9\0\p\j\h\b\d\b\b\4\1\x\3\4\p\h\i\t\j\6\f\h\k\8\6\a\j\x\d\x\v\l\h\x\d\a\8\2\z\8\j\h\9\1\0\n\9\e\e\u\c\w\5\f\f\s\n\a\v\8\l\3\8\5\a\6\h\z\t\0\c\s\y\y\p\f\q\d\z\z\i\k\z\n\w\x\f\1\j\i\f\8\v\o\y\w\h\y\y\x\x\b\7\e\5\u\g\y\r\7\l\k\3\r\3\l\w\6\9\v\c\4\a\a\i\y\n\m\t\y\j\k\z\u\e\a\j\v\1 ]] 00:07:09.070 09:33:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:09.328 09:33:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:09.328 09:33:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:09.328 09:33:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:09.328 09:33:03 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.328 [2024-07-15 09:33:03.665133] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:09.328 [2024-07-15 09:33:03.665223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64299 ] 00:07:09.328 { 00:07:09.328 "subsystems": [ 00:07:09.328 { 00:07:09.328 "subsystem": "bdev", 00:07:09.328 "config": [ 00:07:09.328 { 00:07:09.328 "params": { 00:07:09.328 "block_size": 512, 00:07:09.328 "num_blocks": 1048576, 00:07:09.328 "name": "malloc0" 00:07:09.328 }, 00:07:09.328 "method": "bdev_malloc_create" 00:07:09.328 }, 00:07:09.328 { 00:07:09.328 "params": { 00:07:09.328 "filename": "/dev/zram1", 00:07:09.328 "name": "uring0" 00:07:09.328 }, 00:07:09.328 "method": "bdev_uring_create" 00:07:09.328 }, 00:07:09.328 { 00:07:09.328 "method": "bdev_wait_for_examine" 00:07:09.328 } 00:07:09.328 ] 00:07:09.328 } 00:07:09.328 ] 00:07:09.328 } 00:07:09.589 [2024-07-15 09:33:03.796439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.589 [2024-07-15 09:33:03.910136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.589 [2024-07-15 09:33:03.965359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.669  Copying: 146/512 [MB] (146 MBps) Copying: 297/512 [MB] (150 MBps) Copying: 446/512 [MB] (149 MBps) Copying: 512/512 [MB] (average 149 MBps) 00:07:13.669 00:07:13.669 09:33:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:13.669 09:33:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:13.669 09:33:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:13.669 09:33:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:13.669 09:33:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:13.669 09:33:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:13.669 09:33:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:13.669 09:33:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:13.669 [2024-07-15 09:33:08.089967] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:13.669 [2024-07-15 09:33:08.090117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64355 ] 00:07:13.669 { 00:07:13.669 "subsystems": [ 00:07:13.669 { 00:07:13.669 "subsystem": "bdev", 00:07:13.669 "config": [ 00:07:13.669 { 00:07:13.669 "params": { 00:07:13.669 "block_size": 512, 00:07:13.669 "num_blocks": 1048576, 00:07:13.669 "name": "malloc0" 00:07:13.669 }, 00:07:13.669 "method": "bdev_malloc_create" 00:07:13.669 }, 00:07:13.669 { 00:07:13.669 "params": { 00:07:13.669 "filename": "/dev/zram1", 00:07:13.669 "name": "uring0" 00:07:13.669 }, 00:07:13.669 "method": "bdev_uring_create" 00:07:13.669 }, 00:07:13.669 { 00:07:13.669 "params": { 00:07:13.669 "name": "uring0" 00:07:13.669 }, 00:07:13.669 "method": "bdev_uring_delete" 00:07:13.669 }, 00:07:13.669 { 00:07:13.669 "method": "bdev_wait_for_examine" 00:07:13.669 } 00:07:13.669 ] 00:07:13.669 } 00:07:13.669 ] 00:07:13.669 } 00:07:13.927 [2024-07-15 09:33:08.336915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.186 [2024-07-15 09:33:08.455319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.186 [2024-07-15 09:33:08.513165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.703  Copying: 0/0 [B] (average 0 Bps) 00:07:14.703 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:14.703 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:14.962 09:33:09 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:14.962 [2024-07-15 09:33:09.221826] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:14.962 [2024-07-15 09:33:09.221941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64392 ] 00:07:14.962 { 00:07:14.962 "subsystems": [ 00:07:14.962 { 00:07:14.962 "subsystem": "bdev", 00:07:14.962 "config": [ 00:07:14.962 { 00:07:14.962 "params": { 00:07:14.962 "block_size": 512, 00:07:14.962 "num_blocks": 1048576, 00:07:14.962 "name": "malloc0" 00:07:14.962 }, 00:07:14.962 "method": "bdev_malloc_create" 00:07:14.962 }, 00:07:14.962 { 00:07:14.962 "params": { 00:07:14.962 "filename": "/dev/zram1", 00:07:14.962 "name": "uring0" 00:07:14.962 }, 00:07:14.962 "method": "bdev_uring_create" 00:07:14.962 }, 00:07:14.962 { 00:07:14.962 "params": { 00:07:14.962 "name": "uring0" 00:07:14.962 }, 00:07:14.962 "method": "bdev_uring_delete" 00:07:14.962 }, 00:07:14.962 { 00:07:14.962 "method": "bdev_wait_for_examine" 00:07:14.962 } 00:07:14.962 ] 00:07:14.962 } 00:07:14.962 ] 00:07:14.962 } 00:07:14.962 [2024-07-15 09:33:09.359965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.221 [2024-07-15 09:33:09.480471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.221 [2024-07-15 09:33:09.536137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.479 [2024-07-15 09:33:09.742287] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:15.479 [2024-07-15 09:33:09.742347] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:15.479 [2024-07-15 09:33:09.742360] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:15.479 [2024-07-15 09:33:09.742371] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.737 [2024-07-15 09:33:10.050607] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:15.737 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:15.737 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.737 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:15.737 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:15.737 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:15.737 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.737 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:15.737 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:15.737 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:15.737 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:15.737 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:15.737 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:15.994 00:07:15.994 real 0m16.123s 00:07:15.994 user 0m10.937s 00:07:15.994 sys 0m13.263s 00:07:15.994 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.994 ************************************ 00:07:15.994 END TEST dd_uring_copy 00:07:15.994 09:33:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.994 ************************************ 00:07:16.253 09:33:10 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:16.253 00:07:16.253 real 0m16.253s 00:07:16.253 user 0m10.983s 00:07:16.253 sys 0m13.348s 00:07:16.253 09:33:10 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.253 ************************************ 00:07:16.253 END TEST spdk_dd_uring 00:07:16.253 ************************************ 00:07:16.253 09:33:10 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:16.253 09:33:10 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:16.253 09:33:10 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:16.253 09:33:10 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.253 09:33:10 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.253 09:33:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:16.253 ************************************ 00:07:16.253 START TEST spdk_dd_sparse 00:07:16.253 ************************************ 00:07:16.253 09:33:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:16.253 * Looking for test storage... 00:07:16.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:16.253 09:33:10 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:16.253 09:33:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.253 09:33:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.253 09:33:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.253 09:33:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.253 09:33:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.253 09:33:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.253 09:33:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:16.253 09:33:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.253 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:16.254 1+0 records in 00:07:16.254 1+0 records out 00:07:16.254 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00572126 s, 733 MB/s 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:16.254 1+0 records in 00:07:16.254 1+0 records out 00:07:16.254 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0068381 s, 613 MB/s 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:16.254 1+0 records in 00:07:16.254 1+0 records out 00:07:16.254 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0063551 s, 660 MB/s 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:16.254 ************************************ 00:07:16.254 START TEST dd_sparse_file_to_file 00:07:16.254 ************************************ 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 
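The prepare step above lays out the sparse source file by writing three 4 MiB chunks at offsets 0, 16 MiB and 32 MiB, so file_zero1 ends up with an apparent size of 36 MiB while only 12 MiB is actually allocated; the stat1_s=37748736 / stat1_b=24576 comparisons later in this log check exactly that. A minimal stand-alone sketch of the same apparent-size versus allocated-blocks check, assuming GNU coreutils stat and the file name taken from this log:

  # verify the sparse layout of file_zero1 produced by the prepare step (sketch)
  apparent=$(stat --printf=%s file_zero1)   # apparent size in bytes, expected 37748736 (36 MiB)
  blocks=$(stat --printf=%b file_zero1)     # allocated 512-byte blocks, expected 24576 (12 MiB)
  [ "$apparent" -eq 37748736 ] && [ "$blocks" -eq 24576 ] && echo "file_zero1 sparse layout OK"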
00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:16.254 09:33:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:16.254 [2024-07-15 09:33:10.710193] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:16.254 [2024-07-15 09:33:10.710345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64479 ] 00:07:16.254 { 00:07:16.254 "subsystems": [ 00:07:16.254 { 00:07:16.254 "subsystem": "bdev", 00:07:16.254 "config": [ 00:07:16.254 { 00:07:16.254 "params": { 00:07:16.254 "block_size": 4096, 00:07:16.254 "filename": "dd_sparse_aio_disk", 00:07:16.254 "name": "dd_aio" 00:07:16.254 }, 00:07:16.254 "method": "bdev_aio_create" 00:07:16.254 }, 00:07:16.254 { 00:07:16.254 "params": { 00:07:16.254 "lvs_name": "dd_lvstore", 00:07:16.254 "bdev_name": "dd_aio" 00:07:16.254 }, 00:07:16.254 "method": "bdev_lvol_create_lvstore" 00:07:16.254 }, 00:07:16.254 { 00:07:16.254 "method": "bdev_wait_for_examine" 00:07:16.254 } 00:07:16.254 ] 00:07:16.254 } 00:07:16.254 ] 00:07:16.254 } 00:07:16.517 [2024-07-15 09:33:10.851377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.517 [2024-07-15 09:33:10.966384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.807 [2024-07-15 09:33:11.020255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.065  Copying: 12/36 [MB] (average 1200 MBps) 00:07:17.065 00:07:17.065 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:17.065 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:17.065 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:17.065 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:17.065 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:17.065 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file 
-- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:17.065 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:17.065 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:17.065 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:17.065 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:17.066 00:07:17.066 real 0m0.734s 00:07:17.066 user 0m0.469s 00:07:17.066 sys 0m0.344s 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:17.066 ************************************ 00:07:17.066 END TEST dd_sparse_file_to_file 00:07:17.066 ************************************ 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:17.066 ************************************ 00:07:17.066 START TEST dd_sparse_file_to_bdev 00:07:17.066 ************************************ 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:17.066 09:33:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:17.066 [2024-07-15 09:33:11.485399] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:17.066 [2024-07-15 09:33:11.485483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64527 ] 00:07:17.066 { 00:07:17.066 "subsystems": [ 00:07:17.066 { 00:07:17.066 "subsystem": "bdev", 00:07:17.066 "config": [ 00:07:17.066 { 00:07:17.066 "params": { 00:07:17.066 "block_size": 4096, 00:07:17.066 "filename": "dd_sparse_aio_disk", 00:07:17.066 "name": "dd_aio" 00:07:17.066 }, 00:07:17.066 "method": "bdev_aio_create" 00:07:17.066 }, 00:07:17.066 { 00:07:17.066 "params": { 00:07:17.066 "lvs_name": "dd_lvstore", 00:07:17.066 "lvol_name": "dd_lvol", 00:07:17.066 "size_in_mib": 36, 00:07:17.066 "thin_provision": true 00:07:17.066 }, 00:07:17.066 "method": "bdev_lvol_create" 00:07:17.066 }, 00:07:17.066 { 00:07:17.066 "method": "bdev_wait_for_examine" 00:07:17.066 } 00:07:17.066 ] 00:07:17.066 } 00:07:17.066 ] 00:07:17.066 } 00:07:17.324 [2024-07-15 09:33:11.620092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.324 [2024-07-15 09:33:11.735981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.324 [2024-07-15 09:33:11.789637] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:17.840  Copying: 12/36 [MB] (average 521 MBps) 00:07:17.840 00:07:17.840 00:07:17.840 real 0m0.692s 00:07:17.840 user 0m0.455s 00:07:17.840 sys 0m0.333s 00:07:17.840 ************************************ 00:07:17.840 END TEST dd_sparse_file_to_bdev 00:07:17.840 ************************************ 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:17.840 ************************************ 00:07:17.840 START TEST dd_sparse_bdev_to_file 00:07:17.840 ************************************ 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:17.840 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:17.840 { 00:07:17.840 "subsystems": [ 00:07:17.840 { 00:07:17.840 "subsystem": "bdev", 00:07:17.840 "config": [ 00:07:17.840 { 00:07:17.840 "params": { 00:07:17.840 "block_size": 4096, 00:07:17.840 "filename": "dd_sparse_aio_disk", 00:07:17.840 "name": "dd_aio" 00:07:17.840 }, 00:07:17.840 "method": "bdev_aio_create" 00:07:17.840 }, 00:07:17.840 { 00:07:17.840 "method": "bdev_wait_for_examine" 00:07:17.840 } 00:07:17.840 ] 00:07:17.840 } 00:07:17.840 ] 00:07:17.840 } 00:07:17.840 [2024-07-15 09:33:12.236289] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:17.840 [2024-07-15 09:33:12.236385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64560 ] 00:07:18.098 [2024-07-15 09:33:12.375282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.098 [2024-07-15 09:33:12.493941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.098 [2024-07-15 09:33:12.547714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:18.615  Copying: 12/36 [MB] (average 923 MBps) 00:07:18.615 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:18.615 00:07:18.615 real 0m0.712s 00:07:18.615 user 0m0.463s 00:07:18.615 sys 0m0.342s 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:18.615 ************************************ 00:07:18.615 END TEST dd_sparse_bdev_to_file 00:07:18.615 ************************************ 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:18.615 00:07:18.615 real 0m2.431s 00:07:18.615 user 0m1.478s 00:07:18.615 sys 0m1.218s 00:07:18.615 ************************************ 00:07:18.615 END TEST spdk_dd_sparse 00:07:18.615 ************************************ 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.615 09:33:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:18.615 09:33:12 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:18.615 09:33:12 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:18.615 09:33:12 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.615 09:33:12 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.615 09:33:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:18.615 ************************************ 00:07:18.615 START TEST spdk_dd_negative 00:07:18.615 ************************************ 00:07:18.615 09:33:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:18.615 * Looking for test storage... 00:07:18.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:18.874 09:33:13 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.875 ************************************ 00:07:18.875 START TEST dd_invalid_arguments 00:07:18.875 ************************************ 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.875 09:33:13 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.875 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:18.875 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:18.875 00:07:18.875 CPU options: 00:07:18.875 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:18.875 (like [0,1,10]) 00:07:18.875 --lcores lcore to CPU mapping list. The list is in the format: 00:07:18.875 [<,lcores[@CPUs]>...] 00:07:18.875 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:18.875 Within the group, '-' is used for range separator, 00:07:18.875 ',' is used for single number separator. 00:07:18.875 '( )' can be omitted for single element group, 00:07:18.875 '@' can be omitted if cpus and lcores have the same value 00:07:18.875 --disable-cpumask-locks Disable CPU core lock files. 00:07:18.875 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:18.875 pollers in the app support interrupt mode) 00:07:18.875 -p, --main-core main (primary) core for DPDK 00:07:18.875 00:07:18.875 Configuration options: 00:07:18.875 -c, --config, --json JSON config file 00:07:18.875 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:18.875 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:18.875 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:18.875 --rpcs-allowed comma-separated list of permitted RPCS 00:07:18.875 --json-ignore-init-errors don't exit on invalid config entry 00:07:18.875 00:07:18.875 Memory options: 00:07:18.875 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:18.875 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:18.875 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:18.875 -R, --huge-unlink unlink huge files after initialization 00:07:18.875 -n, --mem-channels number of memory channels used for DPDK 00:07:18.875 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:18.875 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:18.875 --no-huge run without using hugepages 00:07:18.875 -i, --shm-id shared memory ID (optional) 00:07:18.875 -g, --single-file-segments force creating just one hugetlbfs file 00:07:18.875 00:07:18.875 PCI options: 00:07:18.875 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:18.875 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:18.875 -u, --no-pci disable PCI access 00:07:18.875 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:18.875 00:07:18.875 Log options: 00:07:18.875 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:18.875 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:18.875 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:18.875 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:18.875 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:18.875 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:18.875 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:18.875 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:18.875 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:18.875 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:18.875 virtio_vfio_user, vmd) 00:07:18.875 --silence-noticelog disable notice level logging to stderr 00:07:18.875 00:07:18.875 Trace options: 00:07:18.875 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:18.875 setting 0 to disable trace (default 32768) 00:07:18.875 Tracepoints vary in size and can use more than one trace entry. 00:07:18.875 -e, --tpoint-group [:] 00:07:18.875 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:18.875 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:18.875 [2024-07-15 09:33:13.157455] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:18.875 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:18.875 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:18.875 a tracepoint group. First tpoint inside a group can be enabled by 00:07:18.875 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:18.875 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:18.875 in /include/spdk_internal/trace_defs.h 00:07:18.875 00:07:18.875 Other options: 00:07:18.875 -h, --help show this usage 00:07:18.875 -v, --version print SPDK version 00:07:18.875 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:18.875 --env-context Opaque context for use of the env implementation 00:07:18.875 00:07:18.875 Application specific: 00:07:18.875 [--------- DD Options ---------] 00:07:18.875 --if Input file. Must specify either --if or --ib. 00:07:18.875 --ib Input bdev. Must specifier either --if or --ib 00:07:18.875 --of Output file. Must specify either --of or --ob. 00:07:18.876 --ob Output bdev. Must specify either --of or --ob. 00:07:18.876 --iflag Input file flags. 00:07:18.876 --oflag Output file flags. 00:07:18.876 --bs I/O unit size (default: 4096) 00:07:18.876 --qd Queue depth (default: 2) 00:07:18.876 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:18.876 --skip Skip this many I/O units at start of input. (default: 0) 00:07:18.876 --seek Skip this many I/O units at start of output. (default: 0) 00:07:18.876 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:18.876 --sparse Enable hole skipping in input target 00:07:18.876 Available iflag and oflag values: 00:07:18.876 append - append mode 00:07:18.876 direct - use direct I/O for data 00:07:18.876 directory - fail unless a directory 00:07:18.876 dsync - use synchronized I/O for data 00:07:18.876 noatime - do not update access time 00:07:18.876 noctty - do not assign controlling terminal from file 00:07:18.876 nofollow - do not follow symlinks 00:07:18.876 nonblock - use non-blocking I/O 00:07:18.876 sync - use synchronized I/O for data and metadata 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.876 00:07:18.876 real 0m0.076s 00:07:18.876 user 0m0.051s 00:07:18.876 sys 0m0.024s 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.876 ************************************ 00:07:18.876 END TEST dd_invalid_arguments 00:07:18.876 ************************************ 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:18.876 ************************************ 00:07:18.876 START TEST dd_double_input 00:07:18.876 ************************************ 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:18.876 [2024-07-15 09:33:13.283714] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
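Each negative test in this suite drives spdk_dd with a deliberately broken argument set and only passes when the binary exits non-zero with the expected diagnostic; here both an input file and an input bdev were supplied, and spdk_dd refuses with the error shown above. A minimal stand-alone sketch of the same double-input check, assuming the binary path and dump file from this log (err.log is only an illustrative name):

  # expect spdk_dd to reject --if and --ib given together (sketch)
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 2> err.log; then
    echo "unexpectedly succeeded with both --if and --ib" >&2
    exit 1
  fi
  grep -q 'You may specify either --if or --ib, but not both' err.log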
00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.876 00:07:18.876 real 0m0.073s 00:07:18.876 user 0m0.040s 00:07:18.876 sys 0m0.032s 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:18.876 ************************************ 00:07:18.876 END TEST dd_double_input 00:07:18.876 ************************************ 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.876 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.134 ************************************ 00:07:19.134 START TEST dd_double_output 00:07:19.134 ************************************ 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:19.134 [2024-07-15 09:33:13.402675] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.134 00:07:19.134 real 0m0.074s 00:07:19.134 user 0m0.046s 00:07:19.134 sys 0m0.027s 00:07:19.134 ************************************ 00:07:19.134 END TEST dd_double_output 00:07:19.134 ************************************ 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.134 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.135 ************************************ 00:07:19.135 START TEST dd_no_input 00:07:19.135 ************************************ 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.135 09:33:13 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:19.135 [2024-07-15 09:33:13.530047] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.135 00:07:19.135 real 0m0.074s 00:07:19.135 user 0m0.041s 00:07:19.135 sys 0m0.032s 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:19.135 ************************************ 00:07:19.135 END TEST dd_no_input 00:07:19.135 ************************************ 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.135 ************************************ 00:07:19.135 START TEST dd_no_output 00:07:19.135 ************************************ 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.135 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.393 09:33:13 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:19.393 [2024-07-15 09:33:13.650573] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.393 00:07:19.393 real 0m0.071s 00:07:19.393 user 0m0.043s 00:07:19.393 sys 0m0.027s 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:19.393 ************************************ 00:07:19.393 END TEST dd_no_output 00:07:19.393 ************************************ 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.393 ************************************ 00:07:19.393 START TEST dd_wrong_blocksize 00:07:19.393 ************************************ 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:19.393 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:19.394 [2024-07-15 09:33:13.766780] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.394 00:07:19.394 real 0m0.063s 00:07:19.394 user 0m0.036s 00:07:19.394 sys 0m0.026s 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:19.394 ************************************ 00:07:19.394 END TEST dd_wrong_blocksize 00:07:19.394 ************************************ 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:19.394 ************************************ 00:07:19.394 START TEST dd_smaller_blocksize 00:07:19.394 ************************************ 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:19.394 09:33:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:19.652 [2024-07-15 09:33:13.876395] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:19.652 [2024-07-15 09:33:13.876484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64785 ] 00:07:19.652 [2024-07-15 09:33:14.009785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.911 [2024-07-15 09:33:14.138405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.911 [2024-07-15 09:33:14.194363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.169 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:20.169 [2024-07-15 09:33:14.518438] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:20.169 [2024-07-15 09:33:14.518523] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.169 [2024-07-15 09:33:14.633615] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.473 00:07:20.473 real 0m0.923s 00:07:20.473 user 0m0.445s 00:07:20.473 sys 0m0.371s 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:20.473 ************************************ 00:07:20.473 END TEST dd_smaller_blocksize 00:07:20.473 ************************************ 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.473 ************************************ 00:07:20.473 START TEST dd_invalid_count 00:07:20.473 ************************************ 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:20.473 [2024-07-15 09:33:14.854166] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:20.473 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.474 00:07:20.474 real 0m0.071s 00:07:20.474 user 0m0.046s 00:07:20.474 sys 0m0.025s 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:20.474 ************************************ 00:07:20.474 END TEST dd_invalid_count 
00:07:20.474 ************************************ 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.474 ************************************ 00:07:20.474 START TEST dd_invalid_oflag 00:07:20.474 ************************************ 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.474 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:20.732 [2024-07-15 09:33:14.980240] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:20.732 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:07:20.732 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.732 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.732 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.732 00:07:20.732 real 0m0.078s 00:07:20.732 user 0m0.052s 00:07:20.732 sys 0m0.026s 00:07:20.732 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.732 09:33:14 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:20.732 
************************************ 00:07:20.732 END TEST dd_invalid_oflag 00:07:20.732 ************************************ 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.732 ************************************ 00:07:20.732 START TEST dd_invalid_iflag 00:07:20.732 ************************************ 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:20.732 [2024-07-15 09:33:15.102841] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.732 00:07:20.732 real 0m0.080s 00:07:20.732 user 0m0.046s 00:07:20.732 sys 0m0.034s 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.732 ************************************ 00:07:20.732 END TEST 
dd_invalid_iflag 00:07:20.732 ************************************ 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:20.732 ************************************ 00:07:20.732 START TEST dd_unknown_flag 00:07:20.732 ************************************ 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:20.732 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:20.990 [2024-07-15 09:33:15.236499] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:20.991 [2024-07-15 09:33:15.236606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64877 ] 00:07:20.991 [2024-07-15 09:33:15.375485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.248 [2024-07-15 09:33:15.486783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.248 [2024-07-15 09:33:15.538798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:21.248 [2024-07-15 09:33:15.573078] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:21.248 [2024-07-15 09:33:15.573142] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.248 [2024-07-15 09:33:15.573224] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:21.248 [2024-07-15 09:33:15.573245] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.248 [2024-07-15 09:33:15.573577] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:21.248 [2024-07-15 09:33:15.573624] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.248 [2024-07-15 09:33:15.573694] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:21.248 [2024-07-15 09:33:15.573712] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:21.248 [2024-07-15 09:33:15.685259] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:21.505 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:07:21.505 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.505 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:07:21.505 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:07:21.505 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:07:21.505 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.505 00:07:21.505 real 0m0.611s 00:07:21.505 user 0m0.364s 00:07:21.505 sys 0m0.156s 00:07:21.505 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.505 09:33:15 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:21.505 ************************************ 00:07:21.505 END TEST dd_unknown_flag 00:07:21.505 ************************************ 00:07:21.505 09:33:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:21.505 09:33:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:21.506 ************************************ 00:07:21.506 START TEST dd_invalid_json 00:07:21.506 ************************************ 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:07:21.506 09:33:15 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.506 09:33:15 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:21.506 [2024-07-15 09:33:15.886642] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:21.506 [2024-07-15 09:33:15.886750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64911 ] 00:07:21.763 [2024-07-15 09:33:16.026117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.763 [2024-07-15 09:33:16.141187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.763 [2024-07-15 09:33:16.141264] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:21.763 [2024-07-15 09:33:16.141282] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:21.763 [2024-07-15 09:33:16.141292] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.763 [2024-07-15 09:33:16.141344] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:22.021 09:33:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:07:22.021 09:33:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.021 09:33:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:07:22.021 09:33:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:07:22.021 09:33:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:07:22.021 09:33:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.021 00:07:22.021 real 0m0.407s 00:07:22.021 user 0m0.235s 00:07:22.021 sys 0m0.070s 00:07:22.021 09:33:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.021 09:33:16 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:22.021 ************************************ 00:07:22.021 END TEST dd_invalid_json 00:07:22.021 ************************************ 00:07:22.021 09:33:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:22.021 ************************************ 00:07:22.021 END TEST spdk_dd_negative 00:07:22.021 ************************************ 00:07:22.021 00:07:22.021 real 0m3.278s 00:07:22.021 user 0m1.668s 00:07:22.021 sys 0m1.265s 00:07:22.021 09:33:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.021 09:33:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:22.021 09:33:16 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:22.021 00:07:22.021 real 1m22.018s 00:07:22.021 user 0m54.079s 00:07:22.021 sys 0m34.771s 00:07:22.021 09:33:16 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.021 09:33:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:22.021 ************************************ 00:07:22.021 END TEST spdk_dd 00:07:22.021 ************************************ 00:07:22.021 09:33:16 -- common/autotest_common.sh@1142 -- # return 0 00:07:22.021 09:33:16 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:22.021 09:33:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:22.021 09:33:16 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:22.021 09:33:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.021 09:33:16 -- common/autotest_common.sh@10 -- # set +x 00:07:22.021 09:33:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:07:22.021 09:33:16 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:22.021 09:33:16 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:22.021 09:33:16 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:22.021 09:33:16 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:22.021 09:33:16 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:22.021 09:33:16 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:22.021 09:33:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.021 09:33:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.021 09:33:16 -- common/autotest_common.sh@10 -- # set +x 00:07:22.021 ************************************ 00:07:22.021 START TEST nvmf_tcp 00:07:22.021 ************************************ 00:07:22.021 09:33:16 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:22.021 * Looking for test storage... 00:07:22.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:22.021 09:33:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:22.021 09:33:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.279 09:33:16 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.279 09:33:16 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.279 09:33:16 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.279 09:33:16 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.279 09:33:16 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.279 09:33:16 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.279 09:33:16 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:22.279 09:33:16 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:22.279 09:33:16 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.279 09:33:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:22.279 09:33:16 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:22.279 09:33:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.279 09:33:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.279 09:33:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.279 ************************************ 00:07:22.279 START TEST nvmf_host_management 00:07:22.279 ************************************ 00:07:22.279 
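The nvmf_host_management test that starts here runs against a purely virtual network: nvmftestinit falls through to nvmf_veth_init (nvmf/common.sh), which builds a veth-plus-bridge topology inside a dedicated network namespace, as the trace below shows. A condensed sketch of that topology, using only interface names, addresses and commands that appear in the trace (the second target interface nvmf_tgt_if2/10.0.0.3 and the bridge FORWARD rule are left out for brevity):

  # Target side lives in its own namespace; the initiator side stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator <-> bridge
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target    <-> bridge
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # reachability check before any NVMe/TCP listener exists

The ping statistics further down confirm initiator-to-target (10.0.0.2), second-target-interface (10.0.0.3) and target-to-initiator (10.0.0.1) reachability before any NVMe-oF traffic is attempted.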
09:33:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:22.279 * Looking for test storage... 00:07:22.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.279 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:22.280 Cannot find device "nvmf_init_br" 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:22.280 Cannot find device "nvmf_tgt_br" 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:22.280 Cannot find device "nvmf_tgt_br2" 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:22.280 Cannot find device "nvmf_init_br" 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:22.280 Cannot find device "nvmf_tgt_br" 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:22.280 09:33:16 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:22.280 Cannot find device "nvmf_tgt_br2" 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:22.280 Cannot find device "nvmf_br" 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:22.280 Cannot find device "nvmf_init_if" 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:22.280 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:22.280 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:22.280 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
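The run of "Cannot find device" and "Cannot open network namespace" messages above is expected rather than a failure: before building the topology, the veth init code first tears down anything left over from a previous run, and each failed cleanup command is tolerated (the bare "# true" trace lines after each error correspond to the fallback branch). A minimal sketch of that idiom, assuming a simplified form of the helper:

  # Tolerant teardown: ignore errors when the old topology does not exist yet.
  ip link set nvmf_init_br nomaster 2>/dev/null || true
  ip link set nvmf_tgt_br down 2>/dev/null || true
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if 2>/dev/null || true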
00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:22.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:07:22.538 00:07:22.538 --- 10.0.0.2 ping statistics --- 00:07:22.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.538 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:22.538 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:22.538 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:07:22.538 00:07:22.538 --- 10.0.0.3 ping statistics --- 00:07:22.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.538 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:07:22.538 09:33:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:22.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:22.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:07:22.538 00:07:22.538 --- 10.0.0.1 ping statistics --- 00:07:22.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.538 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.795 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=65175 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65175 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65175 ']' 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.795 09:33:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.795 [2024-07-15 09:33:17.088562] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:22.795 [2024-07-15 09:33:17.088651] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.795 [2024-07-15 09:33:17.225176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.129 [2024-07-15 09:33:17.346062] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.129 [2024-07-15 09:33:17.346346] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.129 [2024-07-15 09:33:17.346509] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.129 [2024-07-15 09:33:17.346632] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.129 [2024-07-15 09:33:17.346668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
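At this point nvmfappstart has launched the target application inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, which gives the four reactors reported below) and the test blocks in waitforlisten until the RPC socket answers. A rough sketch of that launch-and-wait pattern, assuming a simplified poll loop (the real waitforlisten in autotest_common.sh retries rpc.py with timeouts and also checks that the pid is still alive):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # Poll until the application starts serving JSON-RPC on the default socket.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done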
00:07:23.129 [2024-07-15 09:33:17.346940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.129 [2024-07-15 09:33:17.346987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.129 [2024-07-15 09:33:17.347039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:23.129 [2024-07-15 09:33:17.347040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.129 [2024-07-15 09:33:17.402315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.729 [2024-07-15 09:33:18.055648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.729 Malloc0 00:07:23.729 [2024-07-15 09:33:18.127723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.729 09:33:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
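The target side is now fully provisioned: the TCP transport exists, Malloc0 (a 64 MiB RAM bdev with 512-byte blocks, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above) backs a subsystem, and a listener is up on 10.0.0.2:4420. The test drives this through a batched rpcs.txt piped into rpc_cmd; a hedged sketch of roughly equivalent individual scripts/rpc.py calls (exact flags in host_management.sh may differ):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

bdevperf, started next on its own RPC socket (/var/tmp/bdevperf.sock), attaches to this subsystem as host nqn.2016-06.io.spdk:host0 using the JSON config printed below and runs queue-depth-64, 64 KiB verify I/O for 10 seconds while the host entry is removed and re-added.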
00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65229 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65229 /var/tmp/bdevperf.sock 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65229 ']' 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:23.730 { 00:07:23.730 "params": { 00:07:23.730 "name": "Nvme$subsystem", 00:07:23.730 "trtype": "$TEST_TRANSPORT", 00:07:23.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:23.730 "adrfam": "ipv4", 00:07:23.730 "trsvcid": "$NVMF_PORT", 00:07:23.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:23.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:23.730 "hdgst": ${hdgst:-false}, 00:07:23.730 "ddgst": ${ddgst:-false} 00:07:23.730 }, 00:07:23.730 "method": "bdev_nvme_attach_controller" 00:07:23.730 } 00:07:23.730 EOF 00:07:23.730 )") 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:23.730 09:33:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:23.730 "params": { 00:07:23.730 "name": "Nvme0", 00:07:23.730 "trtype": "tcp", 00:07:23.730 "traddr": "10.0.0.2", 00:07:23.730 "adrfam": "ipv4", 00:07:23.730 "trsvcid": "4420", 00:07:23.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:23.730 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:23.730 "hdgst": false, 00:07:23.730 "ddgst": false 00:07:23.730 }, 00:07:23.730 "method": "bdev_nvme_attach_controller" 00:07:23.730 }' 00:07:23.986 [2024-07-15 09:33:18.229154] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:07:23.986 [2024-07-15 09:33:18.229243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65229 ] 00:07:23.986 [2024-07-15 09:33:18.365659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.243 [2024-07-15 09:33:18.498020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.243 [2024-07-15 09:33:18.561020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.243 Running I/O for 10 seconds... 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=713 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 713 -ge 100 ']' 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.811 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:24.812 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.812 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.812 09:33:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.812 [2024-07-15 09:33:19.269574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 09:33:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:24.812 [2024-07-15 09:33:19.269628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.269661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.269674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.269690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.269701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.269716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.269727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.269741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.269752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.269766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.269777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.269790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.269801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.269816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.269827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.269850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.269862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.269876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.269887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.269918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.269932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.269946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.269958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.269978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.269990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.812 [2024-07-15 09:33:19.270490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.812 [2024-07-15 09:33:19.270501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.270983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.270994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.271018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.271041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.271065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.271089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.271132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.271157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.271181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.271205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.271229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.271253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.271276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:24.813 [2024-07-15 09:33:19.271302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:24.813 [2024-07-15 09:33:19.271315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732ec0 is same with the state(5) to be set 00:07:24.813 [2024-07-15 09:33:19.271393] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x732ec0 was disconnected and freed. reset controller. 
00:07:24.813 [2024-07-15 09:33:19.271580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:07:24.813 [2024-07-15 09:33:19.271602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:24.813 [2024-07-15 09:33:19.271616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:07:24.813 [2024-07-15 09:33:19.271627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:24.813 [2024-07-15 09:33:19.271639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:07:24.813 [2024-07-15 09:33:19.271649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:24.813 [2024-07-15 09:33:19.271660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:07:24.813 [2024-07-15 09:33:19.271671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:24.813 [2024-07-15 09:33:19.271682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ad50 is same with the state(5) to be set
00:07:24.814 [2024-07-15 09:33:19.272784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:07:24.814 task offset: 106496 on job bdev=Nvme0n1 fails
00:07:24.814
00:07:24.814                                                                 Latency(us)
00:07:24.814 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:24.814 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:24.814 Job: Nvme0n1 ended in about 0.59 seconds with error
00:07:24.814 Verification LBA range: start 0x0 length 0x400
00:07:24.814 Nvme0n1                     :       0.59    1414.24      88.39     108.79       0.00   40787.63    2621.44   40513.16
00:07:24.814 ===================================================================================================================
00:07:24.814 Total                       :            1414.24      88.39     108.79       0.00   40787.63    2621.44   40513.16
00:07:24.814 [2024-07-15 09:33:19.275457] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:24.814 [2024-07-15 09:33:19.275514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x72ad50 (9): Bad file descriptor
00:07:25.071 [2024-07-15 09:33:19.280184] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65229 00:07:26.005 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65229) - No such process 00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:26.005 { 00:07:26.005 "params": { 00:07:26.005 "name": "Nvme$subsystem", 00:07:26.005 "trtype": "$TEST_TRANSPORT", 00:07:26.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:26.005 "adrfam": "ipv4", 00:07:26.005 "trsvcid": "$NVMF_PORT", 00:07:26.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:26.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:26.005 "hdgst": ${hdgst:-false}, 00:07:26.005 "ddgst": ${ddgst:-false} 00:07:26.005 }, 00:07:26.005 "method": "bdev_nvme_attach_controller" 00:07:26.005 } 00:07:26.005 EOF 00:07:26.005 )") 00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:26.005 09:33:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:26.005 "params": { 00:07:26.005 "name": "Nvme0", 00:07:26.005 "trtype": "tcp", 00:07:26.005 "traddr": "10.0.0.2", 00:07:26.005 "adrfam": "ipv4", 00:07:26.005 "trsvcid": "4420", 00:07:26.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:26.006 "hdgst": false, 00:07:26.006 "ddgst": false 00:07:26.006 }, 00:07:26.006 "method": "bdev_nvme_attach_controller" 00:07:26.006 }' 00:07:26.006 [2024-07-15 09:33:20.324947] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:26.006 [2024-07-15 09:33:20.325688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65267 ] 00:07:26.006 [2024-07-15 09:33:20.456882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.264 [2024-07-15 09:33:20.601165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.264 [2024-07-15 09:33:20.663440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.522 Running I/O for 1 seconds... 
00:07:27.455
00:07:27.455                                                                 Latency(us)
00:07:27.455 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:27.455 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:27.455 Verification LBA range: start 0x0 length 0x400
00:07:27.455 Nvme0n1                     :       1.01    1522.95      95.18       0.00       0.00   41197.22    4706.68   37891.72
00:07:27.455 ===================================================================================================================
00:07:27.455 Total                       :            1522.95      95.18       0.00       0.00   41197.22    4706.68   37891.72
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:27.713 rmmod nvme_tcp
00:07:27.713 rmmod nvme_fabrics
00:07:27.713 rmmod nvme_keyring
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65175 ']'
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65175
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 65175 ']'
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 65175
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65175
00:07:27.713 killing process with pid 65175
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65175'
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 65175
00:07:27.713 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 65175
00:07:27.972 [2024-07-15 09:33:22.397425] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd
for core 1, errno: 2 00:07:27.972 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:27.972 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:27.972 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:27.972 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:27.972 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:27.972 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.972 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.972 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.231 09:33:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:28.231 09:33:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:28.231 00:07:28.231 real 0m5.939s 00:07:28.231 user 0m22.781s 00:07:28.231 sys 0m1.527s 00:07:28.231 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.231 ************************************ 00:07:28.231 END TEST nvmf_host_management 00:07:28.231 09:33:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.231 ************************************ 00:07:28.231 09:33:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:28.231 09:33:22 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:28.231 09:33:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:28.231 09:33:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.231 09:33:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.231 ************************************ 00:07:28.231 START TEST nvmf_lvol 00:07:28.231 ************************************ 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:28.231 * Looking for test storage... 
00:07:28.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:28.231 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:28.232 09:33:22 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:28.232 Cannot find device "nvmf_tgt_br" 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:28.232 Cannot find device "nvmf_tgt_br2" 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:28.232 Cannot find device "nvmf_tgt_br" 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:28.232 Cannot find device "nvmf_tgt_br2" 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:28.232 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:28.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:28.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:28.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:07:28.490 00:07:28.490 --- 10.0.0.2 ping statistics --- 00:07:28.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.490 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:28.490 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:28.490 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:07:28.490 00:07:28.490 --- 10.0.0.3 ping statistics --- 00:07:28.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.490 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:28.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:28.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:28.490 00:07:28.490 --- 10.0.0.1 ping statistics --- 00:07:28.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.490 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:28.490 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:28.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65486 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65486 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65486 ']' 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.491 09:33:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:28.749 [2024-07-15 09:33:22.970747] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:28.749 [2024-07-15 09:33:22.970836] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.749 [2024-07-15 09:33:23.107219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.007 [2024-07-15 09:33:23.223809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.007 [2024-07-15 09:33:23.224075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:29.007 [2024-07-15 09:33:23.224236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.007 [2024-07-15 09:33:23.224388] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.007 [2024-07-15 09:33:23.224425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.007 [2024-07-15 09:33:23.224748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.007 [2024-07-15 09:33:23.224876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.007 [2024-07-15 09:33:23.224878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.007 [2024-07-15 09:33:23.278333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.573 09:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.573 09:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:07:29.573 09:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:29.573 09:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:29.573 09:33:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.573 09:33:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.573 09:33:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:29.830 [2024-07-15 09:33:24.203275] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.830 09:33:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.088 09:33:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:30.088 09:33:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.346 09:33:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:30.346 09:33:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:30.604 09:33:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:30.863 09:33:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=212e39fb-7c14-46d8-9de1-ebc1c3f14093 00:07:30.863 09:33:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 212e39fb-7c14-46d8-9de1-ebc1c3f14093 lvol 20 00:07:31.121 09:33:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f222913e-5e25-4964-a339-1a37cbf8f0d4 00:07:31.121 09:33:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.379 09:33:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f222913e-5e25-4964-a339-1a37cbf8f0d4 00:07:31.637 09:33:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:31.894 [2024-07-15 09:33:26.214580] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.894 09:33:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.153 09:33:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65567 00:07:32.153 09:33:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:32.153 09:33:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:33.086 09:33:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot f222913e-5e25-4964-a339-1a37cbf8f0d4 MY_SNAPSHOT 00:07:33.345 09:33:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3cb8c534-9456-4db6-8b1a-b39f6c293cd8 00:07:33.345 09:33:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize f222913e-5e25-4964-a339-1a37cbf8f0d4 30 00:07:33.909 09:33:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 3cb8c534-9456-4db6-8b1a-b39f6c293cd8 MY_CLONE 00:07:34.167 09:33:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=12340c63-7471-43dd-8d98-f24216d51f37 00:07:34.167 09:33:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 12340c63-7471-43dd-8d98-f24216d51f37 00:07:34.425 09:33:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65567 00:07:42.534 Initializing NVMe Controllers 00:07:42.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:42.534 Controller IO queue size 128, less than required. 00:07:42.534 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:42.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:42.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:42.534 Initialization complete. Launching workers. 
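The nvmf_lvol test traced above layers its volume on a RAID-0 of two malloc bdevs, exports it over NVMe/TCP, and then exercises the snapshot and clone path while spdk_nvme_perf keeps 128 random writes in flight against the namespace. Stripped of the autotest plumbing, the RPC sequence is roughly the following sketch; rpc stands for scripts/rpc.py, and the shell variables stand in for the UUIDs captured above (212e39fb-... for the store, f222913e-... for the lvol, and so on):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                   # same transport options as in the trace
    $rpc bdev_malloc_create 64 512                                 # Malloc0: 64 MiB, 512-byte blocks
    $rpc bdev_malloc_create 64 512                                 # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' # stripe the two malloc bdevs
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                 # lvstore on top of the raid bdev
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                # 20 MiB logical volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # keep I/O running while the lvol is manipulated underneath it
    ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    sleep 1
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)            # snapshot the live volume
    $rpc bdev_lvol_resize "$lvol" 30                               # grow the origin to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)                 # thin clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                                # detach the clone from its snapshot
    wait "$perf_pid"                                               # perf must still finish cleanly

The point of the test is that final wait: snapshot, resize, clone and inflate all happen under load, and the run still completes with the throughput summarised in the table that follows.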
00:07:42.534 ======================================================== 00:07:42.534 Latency(us) 00:07:42.534 Device Information : IOPS MiB/s Average min max 00:07:42.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10151.00 39.65 12610.17 3648.68 69287.19 00:07:42.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10248.20 40.03 12492.23 1543.05 93845.55 00:07:42.534 ======================================================== 00:07:42.534 Total : 20399.20 79.68 12550.92 1543.05 93845.55 00:07:42.534 00:07:42.534 09:33:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.792 09:33:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f222913e-5e25-4964-a339-1a37cbf8f0d4 00:07:43.050 09:33:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 212e39fb-7c14-46d8-9de1-ebc1c3f14093 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.308 rmmod nvme_tcp 00:07:43.308 rmmod nvme_fabrics 00:07:43.308 rmmod nvme_keyring 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65486 ']' 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65486 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65486 ']' 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65486 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65486 00:07:43.308 killing process with pid 65486 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65486' 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65486 00:07:43.308 09:33:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65486 00:07:43.566 09:33:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:43.566 09:33:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
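Teardown then runs in the reverse order of setup: the subsystem is removed first so nothing references the lvol, the lvol and its store are deleted, the host-side nvme modules are unloaded, and the target process is killed last. Continuing the sketch above (pid 65486 is the nvmf_tgt instance started earlier):

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # stop exporting before touching the storage
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
    modprobe -r nvme-tcp                                     # host-side module cleanup
    modprobe -r nvme-fabrics
    kill "$nvmfpid"                                          # finally stop nvmf_tgt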
00:07:43.566 09:33:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:43.566 09:33:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.566 09:33:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:43.566 09:33:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.566 09:33:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.566 09:33:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:43.824 ************************************ 00:07:43.824 END TEST nvmf_lvol 00:07:43.824 ************************************ 00:07:43.824 00:07:43.824 real 0m15.553s 00:07:43.824 user 1m4.788s 00:07:43.824 sys 0m4.228s 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:43.824 09:33:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:43.824 09:33:38 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:43.824 09:33:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.824 09:33:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.824 09:33:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.824 ************************************ 00:07:43.824 START TEST nvmf_lvs_grow 00:07:43.824 ************************************ 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:43.824 * Looking for test storage... 
00:07:43.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.824 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:43.825 Cannot find device "nvmf_tgt_br" 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:43.825 Cannot find device "nvmf_tgt_br2" 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:43.825 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:44.082 Cannot find device "nvmf_tgt_br" 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:44.082 Cannot find device "nvmf_tgt_br2" 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:44.082 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:44.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:44.082 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:44.340 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:44.340 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:44.340 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:44.340 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:44.340 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:44.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:44.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:07:44.340 00:07:44.340 --- 10.0.0.2 ping statistics --- 00:07:44.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.340 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:44.340 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:44.340 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:44.340 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:44.340 00:07:44.340 --- 10.0.0.3 ping statistics --- 00:07:44.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.340 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:44.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:44.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:07:44.341 00:07:44.341 --- 10.0.0.1 ping statistics --- 00:07:44.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.341 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65888 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65888 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65888 ']' 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
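Before this second target comes up, nvmftestinit tears down whatever the previous test left behind and rebuilds the same veth topology. Two idioms in that stretch of the trace are easy to misread. First, every teardown command is allowed to fail, which is why the "Cannot find device" and "Cannot open network namespace" errors are each followed by a bare true in the xtrace: removing the old namespace also removed the veth ends inside it, so some host-side peers were already gone. Second, the namespace-removal helper runs with its own xtrace parked on a spare file descriptor and discarded, which is what the eval '_remove_spdk_ns 14> /dev/null' lines do. In shell terms the pattern is roughly the following; the || true spelling is an approximation of the guards in nvmf/common.sh, and the observable behaviour is what matters:

    # tolerant cleanup: leftovers may or may not exist, so individual failures are ignored
    ip link set nvmf_tgt_br nomaster || true
    ip link set nvmf_tgt_br2 nomaster || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    # noisy harness helper: its trace goes to fd 14, which is pointed at /dev/null
    _remove_spdk_ns 14> /dev/null
    # the namespace, veths, bridge and addresses are then rebuilt exactly as sketched earlier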
00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.341 09:33:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.341 [2024-07-15 09:33:38.689418] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:44.341 [2024-07-15 09:33:38.689519] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.599 [2024-07-15 09:33:38.831703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.599 [2024-07-15 09:33:38.951252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.599 [2024-07-15 09:33:38.951312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.599 [2024-07-15 09:33:38.951339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.599 [2024-07-15 09:33:38.951347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.599 [2024-07-15 09:33:38.951354] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.599 [2024-07-15 09:33:38.951380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.599 [2024-07-15 09:33:39.007361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.577 09:33:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.577 09:33:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:07:45.577 09:33:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:45.577 09:33:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:45.577 09:33:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.577 09:33:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.577 09:33:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:45.577 [2024-07-15 09:33:39.955002] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.577 09:33:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:45.577 09:33:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.577 09:33:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.577 09:33:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.577 ************************************ 00:07:45.577 START TEST lvs_grow_clean 00:07:45.577 ************************************ 00:07:45.577 09:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:07:45.578 09:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:45.578 09:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:45.578 09:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:45.578 09:33:39 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:45.578 09:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:45.578 09:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:45.578 09:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:45.578 09:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:45.578 09:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.836 09:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:45.836 09:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:46.093 09:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f0fa265f-a6fc-4779-bf47-9b15eaa89708 00:07:46.093 09:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0fa265f-a6fc-4779-bf47-9b15eaa89708 00:07:46.093 09:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:46.351 09:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:46.351 09:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:46.351 09:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f0fa265f-a6fc-4779-bf47-9b15eaa89708 lvol 150 00:07:46.610 09:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e9da24ea-e10d-41f2-a719-7747c4968622 00:07:46.610 09:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:46.610 09:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:46.869 [2024-07-15 09:33:41.242774] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:46.869 [2024-07-15 09:33:41.242876] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:46.869 true 00:07:46.869 09:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0fa265f-a6fc-4779-bf47-9b15eaa89708 00:07:46.869 09:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:47.127 09:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:47.127 09:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:47.385 09:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e9da24ea-e10d-41f2-a719-7747c4968622 00:07:47.643 09:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:47.900 [2024-07-15 09:33:42.215363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.900 09:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:48.212 09:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65976 00:07:48.212 09:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.212 09:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65976 /var/tmp/bdevperf.sock 00:07:48.212 09:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:48.212 09:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 65976 ']' 00:07:48.212 09:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:48.212 09:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.212 09:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:48.212 09:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.212 09:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:48.212 [2024-07-15 09:33:42.526241] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
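The lvs_grow_clean run being prepared above stages everything on a plain file: a 200 MiB file becomes an AIO bdev with 4 KiB blocks, an lvstore with 4 MiB clusters is created on it (49 usable data clusters), and a 150 MiB lvol from that store is exported over NVMe/TCP for bdevperf to write to. The backing file is then grown to 400 MiB and the AIO bdev rescanned; the store still reports 49 clusters at this point because growing the lvstore itself is a separate, explicit step exercised later in the run. Condensed, with paths shortened (aio_file stands for test/nvmf/target/aio_bdev) and shell variables standing in for the captured UUIDs (f0fa265f-... for the store, e9da24ea-... for the lvol):

    truncate -s 200M aio_file                                    # file-backed device, 200 MiB to start
    $rpc bdev_aio_create aio_file aio_bdev 4096                  # AIO bdev with 4 KiB logical blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
               --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 49
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)             # 150 MiB volume on the store
    truncate -s 400M aio_file                                    # enlarge the backing file...
    $rpc bdev_aio_rescan aio_bdev                                # ...and let the bdev see the new size
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49, store not grown yet
    # export the lvol and point a bdevperf instance at it over NVMe/TCP
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode0                   # bdevperf sees the exported lvol as Nvme0n1

The rescan notice in the trace ("old block count 51200, new block count 102400") is exactly the 200 MiB to 400 MiB jump expressed in 4 KiB blocks.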
00:07:48.212 [2024-07-15 09:33:42.526344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65976 ] 00:07:48.470 [2024-07-15 09:33:42.668812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.470 [2024-07-15 09:33:42.797217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.470 [2024-07-15 09:33:42.855846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.035 09:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.035 09:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:07:49.035 09:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:49.293 Nvme0n1 00:07:49.293 09:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:49.551 [ 00:07:49.551 { 00:07:49.551 "name": "Nvme0n1", 00:07:49.551 "aliases": [ 00:07:49.551 "e9da24ea-e10d-41f2-a719-7747c4968622" 00:07:49.551 ], 00:07:49.551 "product_name": "NVMe disk", 00:07:49.551 "block_size": 4096, 00:07:49.551 "num_blocks": 38912, 00:07:49.551 "uuid": "e9da24ea-e10d-41f2-a719-7747c4968622", 00:07:49.551 "assigned_rate_limits": { 00:07:49.551 "rw_ios_per_sec": 0, 00:07:49.551 "rw_mbytes_per_sec": 0, 00:07:49.551 "r_mbytes_per_sec": 0, 00:07:49.551 "w_mbytes_per_sec": 0 00:07:49.551 }, 00:07:49.551 "claimed": false, 00:07:49.551 "zoned": false, 00:07:49.551 "supported_io_types": { 00:07:49.551 "read": true, 00:07:49.551 "write": true, 00:07:49.551 "unmap": true, 00:07:49.551 "flush": true, 00:07:49.551 "reset": true, 00:07:49.551 "nvme_admin": true, 00:07:49.551 "nvme_io": true, 00:07:49.551 "nvme_io_md": false, 00:07:49.551 "write_zeroes": true, 00:07:49.551 "zcopy": false, 00:07:49.551 "get_zone_info": false, 00:07:49.551 "zone_management": false, 00:07:49.551 "zone_append": false, 00:07:49.551 "compare": true, 00:07:49.551 "compare_and_write": true, 00:07:49.551 "abort": true, 00:07:49.551 "seek_hole": false, 00:07:49.551 "seek_data": false, 00:07:49.551 "copy": true, 00:07:49.551 "nvme_iov_md": false 00:07:49.551 }, 00:07:49.551 "memory_domains": [ 00:07:49.551 { 00:07:49.551 "dma_device_id": "system", 00:07:49.551 "dma_device_type": 1 00:07:49.551 } 00:07:49.551 ], 00:07:49.551 "driver_specific": { 00:07:49.551 "nvme": [ 00:07:49.551 { 00:07:49.551 "trid": { 00:07:49.551 "trtype": "TCP", 00:07:49.551 "adrfam": "IPv4", 00:07:49.551 "traddr": "10.0.0.2", 00:07:49.551 "trsvcid": "4420", 00:07:49.551 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:49.551 }, 00:07:49.551 "ctrlr_data": { 00:07:49.552 "cntlid": 1, 00:07:49.552 "vendor_id": "0x8086", 00:07:49.552 "model_number": "SPDK bdev Controller", 00:07:49.552 "serial_number": "SPDK0", 00:07:49.552 "firmware_revision": "24.09", 00:07:49.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.552 "oacs": { 00:07:49.552 "security": 0, 00:07:49.552 "format": 0, 00:07:49.552 "firmware": 0, 00:07:49.552 "ns_manage": 0 00:07:49.552 }, 00:07:49.552 "multi_ctrlr": true, 00:07:49.552 
"ana_reporting": false 00:07:49.552 }, 00:07:49.552 "vs": { 00:07:49.552 "nvme_version": "1.3" 00:07:49.552 }, 00:07:49.552 "ns_data": { 00:07:49.552 "id": 1, 00:07:49.552 "can_share": true 00:07:49.552 } 00:07:49.552 } 00:07:49.552 ], 00:07:49.552 "mp_policy": "active_passive" 00:07:49.552 } 00:07:49.552 } 00:07:49.552 ] 00:07:49.552 09:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65994 00:07:49.552 09:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:49.552 09:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:49.810 Running I/O for 10 seconds... 00:07:50.744 Latency(us) 00:07:50.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.744 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:07:50.744 =================================================================================================================== 00:07:50.744 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:07:50.744 00:07:51.677 09:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f0fa265f-a6fc-4779-bf47-9b15eaa89708 00:07:51.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.677 Nvme0n1 : 2.00 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:07:51.677 =================================================================================================================== 00:07:51.677 Total : 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:07:51.677 00:07:51.935 true 00:07:51.935 09:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0fa265f-a6fc-4779-bf47-9b15eaa89708 00:07:51.935 09:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:52.194 09:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:52.194 09:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:52.194 09:33:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65994 00:07:52.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.760 Nvme0n1 : 3.00 7196.67 28.11 0.00 0.00 0.00 0.00 0.00 00:07:52.760 =================================================================================================================== 00:07:52.760 Total : 7196.67 28.11 0.00 0.00 0.00 0.00 0.00 00:07:52.760 00:07:53.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.695 Nvme0n1 : 4.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:53.695 =================================================================================================================== 00:07:53.695 Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:53.695 00:07:55.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.068 Nvme0n1 : 5.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:55.068 =================================================================================================================== 00:07:55.068 Total : 7239.00 28.28 0.00 0.00 0.00 
0.00 0.00 00:07:55.068 00:07:56.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.001 Nvme0n1 : 6.00 7217.83 28.19 0.00 0.00 0.00 0.00 0.00 00:07:56.001 =================================================================================================================== 00:07:56.001 Total : 7217.83 28.19 0.00 0.00 0.00 0.00 0.00 00:07:56.001 00:07:56.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.647 Nvme0n1 : 7.00 7220.86 28.21 0.00 0.00 0.00 0.00 0.00 00:07:56.647 =================================================================================================================== 00:07:56.647 Total : 7220.86 28.21 0.00 0.00 0.00 0.00 0.00 00:07:56.647 00:07:58.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.018 Nvme0n1 : 8.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:58.018 =================================================================================================================== 00:07:58.018 Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:07:58.018 00:07:58.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.952 Nvme0n1 : 9.00 7224.89 28.22 0.00 0.00 0.00 0.00 0.00 00:07:58.952 =================================================================================================================== 00:07:58.952 Total : 7224.89 28.22 0.00 0.00 0.00 0.00 0.00 00:07:58.952 00:07:59.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.886 Nvme0n1 : 10.00 7188.20 28.08 0.00 0.00 0.00 0.00 0.00 00:07:59.886 =================================================================================================================== 00:07:59.886 Total : 7188.20 28.08 0.00 0.00 0.00 0.00 0.00 00:07:59.886 00:07:59.886 00:07:59.886 Latency(us) 00:07:59.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.886 Nvme0n1 : 10.01 7190.44 28.09 0.00 0.00 17794.31 14239.19 45517.73 00:07:59.886 =================================================================================================================== 00:07:59.886 Total : 7190.44 28.09 0.00 0.00 17794.31 14239.19 45517.73 00:07:59.886 0 00:07:59.886 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65976 00:07:59.886 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 65976 ']' 00:07:59.886 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 65976 00:07:59.886 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:07:59.886 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:59.886 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65976 00:07:59.886 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:59.886 killing process with pid 65976 00:07:59.886 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:59.886 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65976' 00:07:59.886 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 65976 
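This is the core assertion of the test: roughly two seconds into the bdevperf run, the store is grown into the space added by the earlier truncate and rescan, and the cluster count is checked while the random writes continue. Continuing the sketch above:

    $rpc bdev_lvol_grow_lvstore -u "$lvs"                        # adopt the enlarged backing bdev
    clusters=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( clusters == 99 ))                                         # grown from 49 to 99 clusters of 4 MiB
    # bdevperf keeps running for the full 10 s and still averages roughly 7190 IOPS, per the summary above

In the trace that follows, the clean variant then drops the discovery listener and the subsystem, deletes the AIO bdev out from under the store, recreates it from the same file, and verifies with bdev_get_bdevs and bdev_lvol_get_lvstores that the grown store and its lvol come back intact (99 total clusters, 61 free, the lvol holding 38 allocated clusters) before deleting everything.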
00:07:59.886 Received shutdown signal, test time was about 10.000000 seconds 00:07:59.886 00:07:59.886 Latency(us) 00:07:59.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.886 =================================================================================================================== 00:07:59.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:59.886 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 65976 00:08:00.144 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.403 09:33:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:00.661 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0fa265f-a6fc-4779-bf47-9b15eaa89708 00:08:00.661 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:00.920 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:00.920 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:00.920 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:01.486 [2024-07-15 09:33:55.654750] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:01.486 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0fa265f-a6fc-4779-bf47-9b15eaa89708 00:08:01.486 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:01.486 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0fa265f-a6fc-4779-bf47-9b15eaa89708 00:08:01.486 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.486 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:01.486 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.486 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:01.486 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.486 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:01.486 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.486 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:01.486 09:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u f0fa265f-a6fc-4779-bf47-9b15eaa89708 00:08:01.744 request: 00:08:01.744 { 00:08:01.744 "uuid": "f0fa265f-a6fc-4779-bf47-9b15eaa89708", 00:08:01.744 "method": "bdev_lvol_get_lvstores", 00:08:01.744 "req_id": 1 00:08:01.744 } 00:08:01.744 Got JSON-RPC error response 00:08:01.744 response: 00:08:01.744 { 00:08:01.744 "code": -19, 00:08:01.744 "message": "No such device" 00:08:01.744 } 00:08:01.744 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:01.744 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:01.744 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:01.744 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:01.745 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.021 aio_bdev 00:08:02.021 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e9da24ea-e10d-41f2-a719-7747c4968622 00:08:02.021 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=e9da24ea-e10d-41f2-a719-7747c4968622 00:08:02.021 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:02.021 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:02.021 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:02.021 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:02.021 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:02.279 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e9da24ea-e10d-41f2-a719-7747c4968622 -t 2000 00:08:02.537 [ 00:08:02.537 { 00:08:02.537 "name": "e9da24ea-e10d-41f2-a719-7747c4968622", 00:08:02.537 "aliases": [ 00:08:02.537 "lvs/lvol" 00:08:02.537 ], 00:08:02.537 "product_name": "Logical Volume", 00:08:02.537 "block_size": 4096, 00:08:02.537 "num_blocks": 38912, 00:08:02.537 "uuid": "e9da24ea-e10d-41f2-a719-7747c4968622", 00:08:02.537 "assigned_rate_limits": { 00:08:02.537 "rw_ios_per_sec": 0, 00:08:02.537 "rw_mbytes_per_sec": 0, 00:08:02.537 "r_mbytes_per_sec": 0, 00:08:02.537 "w_mbytes_per_sec": 0 00:08:02.537 }, 00:08:02.537 "claimed": false, 00:08:02.537 "zoned": false, 00:08:02.537 "supported_io_types": { 00:08:02.537 "read": true, 00:08:02.537 "write": true, 00:08:02.537 "unmap": true, 00:08:02.537 "flush": false, 00:08:02.537 "reset": true, 00:08:02.537 "nvme_admin": false, 00:08:02.537 "nvme_io": false, 00:08:02.537 "nvme_io_md": false, 00:08:02.537 "write_zeroes": true, 00:08:02.537 "zcopy": false, 00:08:02.537 "get_zone_info": false, 00:08:02.537 "zone_management": false, 00:08:02.537 "zone_append": false, 00:08:02.537 "compare": false, 00:08:02.537 "compare_and_write": false, 00:08:02.537 "abort": false, 00:08:02.537 "seek_hole": true, 00:08:02.537 "seek_data": true, 00:08:02.537 "copy": false, 00:08:02.537 "nvme_iov_md": false 00:08:02.537 }, 00:08:02.537 "driver_specific": { 00:08:02.537 "lvol": { 
00:08:02.537 "lvol_store_uuid": "f0fa265f-a6fc-4779-bf47-9b15eaa89708", 00:08:02.537 "base_bdev": "aio_bdev", 00:08:02.537 "thin_provision": false, 00:08:02.537 "num_allocated_clusters": 38, 00:08:02.537 "snapshot": false, 00:08:02.537 "clone": false, 00:08:02.537 "esnap_clone": false 00:08:02.537 } 00:08:02.537 } 00:08:02.537 } 00:08:02.537 ] 00:08:02.537 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:02.537 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0fa265f-a6fc-4779-bf47-9b15eaa89708 00:08:02.537 09:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:02.796 09:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:03.055 09:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:03.055 09:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f0fa265f-a6fc-4779-bf47-9b15eaa89708 00:08:03.314 09:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:03.314 09:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e9da24ea-e10d-41f2-a719-7747c4968622 00:08:03.572 09:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f0fa265f-a6fc-4779-bf47-9b15eaa89708 00:08:03.838 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:04.097 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.354 ************************************ 00:08:04.354 END TEST lvs_grow_clean 00:08:04.354 ************************************ 00:08:04.354 00:08:04.354 real 0m18.810s 00:08:04.354 user 0m17.534s 00:08:04.354 sys 0m2.663s 00:08:04.354 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.354 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:04.613 ************************************ 00:08:04.613 START TEST lvs_grow_dirty 00:08:04.613 ************************************ 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:04.613 09:33:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:04.613 09:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.870 09:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:04.870 09:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:05.127 09:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=631505fa-a455-4a20-a350-018f178e7816 00:08:05.127 09:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 631505fa-a455-4a20-a350-018f178e7816 00:08:05.127 09:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:05.383 09:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:05.383 09:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:05.383 09:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 631505fa-a455-4a20-a350-018f178e7816 lvol 150 00:08:05.640 09:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a44eb354-76b7-433b-b149-c7be025ae04c 00:08:05.640 09:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:05.640 09:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:05.897 [2024-07-15 09:34:00.360923] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:05.897 [2024-07-15 09:34:00.361059] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:06.156 true 00:08:06.156 09:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 631505fa-a455-4a20-a350-018f178e7816 00:08:06.156 09:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:06.414 09:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:08:06.414 09:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:06.672 09:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a44eb354-76b7-433b-b149-c7be025ae04c 00:08:07.238 09:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:07.496 [2024-07-15 09:34:01.705606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.496 09:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:07.753 09:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66256 00:08:07.754 09:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:07.754 09:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:07.754 09:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66256 /var/tmp/bdevperf.sock 00:08:07.754 09:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66256 ']' 00:08:07.754 09:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:07.754 09:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.754 09:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:07.754 09:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.754 09:34:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:07.754 [2024-07-15 09:34:02.185369] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
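The stretch above is where lvs_grow wires the freshly created lvol into an NVMe/TCP target and starts the I/O generator: a subsystem is created, the lvol is added as its namespace by UUID, TCP listeners are opened on 10.0.0.2:4420, and bdevperf is launched with -z so it idles on /var/tmp/bdevperf.sock until a controller is attached. Condensed from the trace (rpc.py short for scripts/rpc.py, $lvol standing in for the UUID returned by bdev_lvol_create), the sequence is roughly:

  # Export the logical volume over NVMe/TCP.
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Start bdevperf (build/examples/bdevperf) in wait-for-RPC mode (-z); the
  # randwrite workload is only kicked off later via bdevperf.py perform_tests,
  # after a controller is attached with bdev_nvme_attach_controller.
  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

The bdevperf output that follows is the usual SPDK application start-up banner (DPDK EAL setup, reactor start, socket implementation selection).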
00:08:07.754 [2024-07-15 09:34:02.185889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66256 ] 00:08:08.030 [2024-07-15 09:34:02.331597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.287 [2024-07-15 09:34:02.523761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.287 [2024-07-15 09:34:02.616127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.852 09:34:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.852 09:34:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:08.852 09:34:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:09.417 Nvme0n1 00:08:09.417 09:34:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:09.675 [ 00:08:09.675 { 00:08:09.675 "name": "Nvme0n1", 00:08:09.675 "aliases": [ 00:08:09.675 "a44eb354-76b7-433b-b149-c7be025ae04c" 00:08:09.675 ], 00:08:09.675 "product_name": "NVMe disk", 00:08:09.675 "block_size": 4096, 00:08:09.675 "num_blocks": 38912, 00:08:09.675 "uuid": "a44eb354-76b7-433b-b149-c7be025ae04c", 00:08:09.675 "assigned_rate_limits": { 00:08:09.675 "rw_ios_per_sec": 0, 00:08:09.675 "rw_mbytes_per_sec": 0, 00:08:09.675 "r_mbytes_per_sec": 0, 00:08:09.675 "w_mbytes_per_sec": 0 00:08:09.675 }, 00:08:09.675 "claimed": false, 00:08:09.675 "zoned": false, 00:08:09.675 "supported_io_types": { 00:08:09.675 "read": true, 00:08:09.675 "write": true, 00:08:09.675 "unmap": true, 00:08:09.675 "flush": true, 00:08:09.675 "reset": true, 00:08:09.675 "nvme_admin": true, 00:08:09.675 "nvme_io": true, 00:08:09.675 "nvme_io_md": false, 00:08:09.675 "write_zeroes": true, 00:08:09.675 "zcopy": false, 00:08:09.675 "get_zone_info": false, 00:08:09.675 "zone_management": false, 00:08:09.675 "zone_append": false, 00:08:09.675 "compare": true, 00:08:09.675 "compare_and_write": true, 00:08:09.675 "abort": true, 00:08:09.675 "seek_hole": false, 00:08:09.675 "seek_data": false, 00:08:09.675 "copy": true, 00:08:09.675 "nvme_iov_md": false 00:08:09.675 }, 00:08:09.675 "memory_domains": [ 00:08:09.675 { 00:08:09.675 "dma_device_id": "system", 00:08:09.675 "dma_device_type": 1 00:08:09.675 } 00:08:09.675 ], 00:08:09.675 "driver_specific": { 00:08:09.675 "nvme": [ 00:08:09.675 { 00:08:09.675 "trid": { 00:08:09.675 "trtype": "TCP", 00:08:09.675 "adrfam": "IPv4", 00:08:09.675 "traddr": "10.0.0.2", 00:08:09.675 "trsvcid": "4420", 00:08:09.675 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:09.675 }, 00:08:09.675 "ctrlr_data": { 00:08:09.675 "cntlid": 1, 00:08:09.675 "vendor_id": "0x8086", 00:08:09.675 "model_number": "SPDK bdev Controller", 00:08:09.675 "serial_number": "SPDK0", 00:08:09.675 "firmware_revision": "24.09", 00:08:09.675 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:09.675 "oacs": { 00:08:09.675 "security": 0, 00:08:09.675 "format": 0, 00:08:09.675 "firmware": 0, 00:08:09.675 "ns_manage": 0 00:08:09.675 }, 00:08:09.675 "multi_ctrlr": true, 00:08:09.675 
"ana_reporting": false 00:08:09.675 }, 00:08:09.675 "vs": { 00:08:09.675 "nvme_version": "1.3" 00:08:09.675 }, 00:08:09.675 "ns_data": { 00:08:09.675 "id": 1, 00:08:09.675 "can_share": true 00:08:09.675 } 00:08:09.675 } 00:08:09.675 ], 00:08:09.675 "mp_policy": "active_passive" 00:08:09.675 } 00:08:09.675 } 00:08:09.675 ] 00:08:09.675 09:34:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66284 00:08:09.675 09:34:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:09.675 09:34:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:09.675 Running I/O for 10 seconds... 00:08:10.608 Latency(us) 00:08:10.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.608 Nvme0n1 : 1.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:08:10.608 =================================================================================================================== 00:08:10.608 Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:08:10.608 00:08:11.540 09:34:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 631505fa-a455-4a20-a350-018f178e7816 00:08:11.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.840 Nvme0n1 : 2.00 7439.50 29.06 0.00 0.00 0.00 0.00 0.00 00:08:11.840 =================================================================================================================== 00:08:11.840 Total : 7439.50 29.06 0.00 0.00 0.00 0.00 0.00 00:08:11.840 00:08:11.840 true 00:08:11.840 09:34:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 631505fa-a455-4a20-a350-018f178e7816 00:08:11.840 09:34:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:12.148 09:34:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:12.148 09:34:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:12.148 09:34:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66284 00:08:12.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.715 Nvme0n1 : 3.00 7626.67 29.79 0.00 0.00 0.00 0.00 0.00 00:08:12.716 =================================================================================================================== 00:08:12.716 Total : 7626.67 29.79 0.00 0.00 0.00 0.00 0.00 00:08:12.716 00:08:13.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.650 Nvme0n1 : 4.00 7549.25 29.49 0.00 0.00 0.00 0.00 0.00 00:08:13.650 =================================================================================================================== 00:08:13.650 Total : 7549.25 29.49 0.00 0.00 0.00 0.00 0.00 00:08:13.650 00:08:14.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.581 Nvme0n1 : 5.00 7106.20 27.76 0.00 0.00 0.00 0.00 0.00 00:08:14.581 =================================================================================================================== 00:08:14.581 Total : 7106.20 27.76 0.00 0.00 0.00 
0.00 0.00 00:08:14.581 00:08:16.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.021 Nvme0n1 : 6.00 7043.67 27.51 0.00 0.00 0.00 0.00 0.00 00:08:16.021 =================================================================================================================== 00:08:16.021 Total : 7043.67 27.51 0.00 0.00 0.00 0.00 0.00 00:08:16.021 00:08:16.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.602 Nvme0n1 : 7.00 7071.57 27.62 0.00 0.00 0.00 0.00 0.00 00:08:16.602 =================================================================================================================== 00:08:16.602 Total : 7071.57 27.62 0.00 0.00 0.00 0.00 0.00 00:08:16.602 00:08:17.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.975 Nvme0n1 : 8.00 7092.50 27.71 0.00 0.00 0.00 0.00 0.00 00:08:17.975 =================================================================================================================== 00:08:17.975 Total : 7092.50 27.71 0.00 0.00 0.00 0.00 0.00 00:08:17.975 00:08:18.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.910 Nvme0n1 : 9.00 7108.78 27.77 0.00 0.00 0.00 0.00 0.00 00:08:18.910 =================================================================================================================== 00:08:18.910 Total : 7108.78 27.77 0.00 0.00 0.00 0.00 0.00 00:08:18.910 00:08:19.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.844 Nvme0n1 : 10.00 7109.10 27.77 0.00 0.00 0.00 0.00 0.00 00:08:19.844 =================================================================================================================== 00:08:19.844 Total : 7109.10 27.77 0.00 0.00 0.00 0.00 0.00 00:08:19.844 00:08:19.844 00:08:19.844 Latency(us) 00:08:19.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.844 Nvme0n1 : 10.01 7114.60 27.79 0.00 0.00 17985.73 10307.03 280255.77 00:08:19.844 =================================================================================================================== 00:08:19.844 Total : 7114.60 27.79 0.00 0.00 17985.73 10307.03 280255.77 00:08:19.844 0 00:08:19.844 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66256 00:08:19.844 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66256 ']' 00:08:19.844 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66256 00:08:19.844 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:19.844 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:19.844 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66256 00:08:19.844 killing process with pid 66256 00:08:19.844 Received shutdown signal, test time was about 10.000000 seconds 00:08:19.844 00:08:19.844 Latency(us) 00:08:19.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.844 =================================================================================================================== 00:08:19.844 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:19.844 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:08:19.844 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:19.844 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66256' 00:08:19.844 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66256 00:08:19.844 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66256 00:08:20.101 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:20.359 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.617 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 631505fa-a455-4a20-a350-018f178e7816 00:08:20.617 09:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65888 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65888 00:08:20.875 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65888 Killed "${NVMF_APP[@]}" "$@" 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:20.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66418 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66418 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66418 ']' 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
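This is the step that makes the dirty variant dirty: instead of deleting the lvstore, the test SIGKILLs the running nvmf_tgt (pid 65888 above), so the blobstore never gets a clean shutdown and its metadata is left unflushed, and it then starts a replacement target that will have to recover that metadata once the AIO bdev is re-created. A rough sketch of the pattern, with $nvmfpid holding the old target's PID:

  # Simulate a crash: kill the target without letting it close the lvstore.
  kill -9 "$nvmfpid"
  wait "$nvmfpid" || true        # reap it; the "Killed" status is expected here

  # Bring up a fresh target (build/bin/nvmf_tgt) inside the test namespace and
  # wait for its RPC socket with the suite's waitforlisten helper.
  ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"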
00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.875 09:34:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:20.875 [2024-07-15 09:34:15.210077] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:20.875 [2024-07-15 09:34:15.210389] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.133 [2024-07-15 09:34:15.351292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.133 [2024-07-15 09:34:15.469074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.133 [2024-07-15 09:34:15.469428] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.133 [2024-07-15 09:34:15.469575] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.133 [2024-07-15 09:34:15.469691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.133 [2024-07-15 09:34:15.469703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.133 [2024-07-15 09:34:15.469743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.133 [2024-07-15 09:34:15.523084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.065 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.065 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:22.065 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.065 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.065 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.065 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.065 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.321 [2024-07-15 09:34:16.550709] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:22.321 [2024-07-15 09:34:16.551132] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:22.321 [2024-07-15 09:34:16.551460] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:22.321 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:22.321 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a44eb354-76b7-433b-b149-c7be025ae04c 00:08:22.321 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a44eb354-76b7-433b-b149-c7be025ae04c 00:08:22.321 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:22.321 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
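The recovery notices just above ("Performing recovery on blobstore", "Recover: blob 0x0/0x1") are the payoff of the dirty shutdown: re-creating the AIO bdev over the same backing file triggers examine on it, the lvstore is found in a dirty state, and the blobstore replays its metadata so the lvol reappears under the same UUID it had before the kill (a44eb354-... in this run). The waitforbdev helper being traced here then simply polls bdev_get_bdevs until that UUID is visible. Condensed, the re-attach step is roughly:

  # Re-create the AIO bdev over the backing file that survived the crash;
  # examine finds the lvstore and runs blobstore recovery.
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096

  # Wait until the recovered lvol bdev is visible again (2000 ms timeout).
  rpc.py bdev_wait_for_examine
  rpc.py bdev_get_bdevs -b "$lvol" -t 2000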
00:08:22.321 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:22.321 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:22.321 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:22.578 09:34:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a44eb354-76b7-433b-b149-c7be025ae04c -t 2000 00:08:22.834 [ 00:08:22.834 { 00:08:22.834 "name": "a44eb354-76b7-433b-b149-c7be025ae04c", 00:08:22.834 "aliases": [ 00:08:22.834 "lvs/lvol" 00:08:22.834 ], 00:08:22.834 "product_name": "Logical Volume", 00:08:22.834 "block_size": 4096, 00:08:22.834 "num_blocks": 38912, 00:08:22.834 "uuid": "a44eb354-76b7-433b-b149-c7be025ae04c", 00:08:22.834 "assigned_rate_limits": { 00:08:22.834 "rw_ios_per_sec": 0, 00:08:22.834 "rw_mbytes_per_sec": 0, 00:08:22.834 "r_mbytes_per_sec": 0, 00:08:22.834 "w_mbytes_per_sec": 0 00:08:22.834 }, 00:08:22.835 "claimed": false, 00:08:22.835 "zoned": false, 00:08:22.835 "supported_io_types": { 00:08:22.835 "read": true, 00:08:22.835 "write": true, 00:08:22.835 "unmap": true, 00:08:22.835 "flush": false, 00:08:22.835 "reset": true, 00:08:22.835 "nvme_admin": false, 00:08:22.835 "nvme_io": false, 00:08:22.835 "nvme_io_md": false, 00:08:22.835 "write_zeroes": true, 00:08:22.835 "zcopy": false, 00:08:22.835 "get_zone_info": false, 00:08:22.835 "zone_management": false, 00:08:22.835 "zone_append": false, 00:08:22.835 "compare": false, 00:08:22.835 "compare_and_write": false, 00:08:22.835 "abort": false, 00:08:22.835 "seek_hole": true, 00:08:22.835 "seek_data": true, 00:08:22.835 "copy": false, 00:08:22.835 "nvme_iov_md": false 00:08:22.835 }, 00:08:22.835 "driver_specific": { 00:08:22.835 "lvol": { 00:08:22.835 "lvol_store_uuid": "631505fa-a455-4a20-a350-018f178e7816", 00:08:22.835 "base_bdev": "aio_bdev", 00:08:22.835 "thin_provision": false, 00:08:22.835 "num_allocated_clusters": 38, 00:08:22.835 "snapshot": false, 00:08:22.835 "clone": false, 00:08:22.835 "esnap_clone": false 00:08:22.835 } 00:08:22.835 } 00:08:22.835 } 00:08:22.835 ] 00:08:22.835 09:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:22.835 09:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 631505fa-a455-4a20-a350-018f178e7816 00:08:22.835 09:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:23.092 09:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:23.092 09:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:23.092 09:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 631505fa-a455-4a20-a350-018f178e7816 00:08:23.349 09:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:23.349 09:34:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.607 [2024-07-15 09:34:17.996626] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:08:23.607 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 631505fa-a455-4a20-a350-018f178e7816 00:08:23.607 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:23.607 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 631505fa-a455-4a20-a350-018f178e7816 00:08:23.607 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.607 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:23.607 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.607 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:23.607 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.607 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:23.607 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.607 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:23.607 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 631505fa-a455-4a20-a350-018f178e7816 00:08:23.926 request: 00:08:23.926 { 00:08:23.926 "uuid": "631505fa-a455-4a20-a350-018f178e7816", 00:08:23.926 "method": "bdev_lvol_get_lvstores", 00:08:23.926 "req_id": 1 00:08:23.926 } 00:08:23.926 Got JSON-RPC error response 00:08:23.926 response: 00:08:23.926 { 00:08:23.926 "code": -19, 00:08:23.926 "message": "No such device" 00:08:23.926 } 00:08:23.926 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:23.926 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:23.926 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:23.926 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:23.926 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.184 aio_bdev 00:08:24.441 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a44eb354-76b7-433b-b149-c7be025ae04c 00:08:24.441 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a44eb354-76b7-433b-b149-c7be025ae04c 00:08:24.441 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:24.441 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:24.441 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:24.441 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:24.441 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.441 09:34:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a44eb354-76b7-433b-b149-c7be025ae04c -t 2000 00:08:24.697 [ 00:08:24.697 { 00:08:24.697 "name": "a44eb354-76b7-433b-b149-c7be025ae04c", 00:08:24.697 "aliases": [ 00:08:24.697 "lvs/lvol" 00:08:24.697 ], 00:08:24.697 "product_name": "Logical Volume", 00:08:24.697 "block_size": 4096, 00:08:24.697 "num_blocks": 38912, 00:08:24.697 "uuid": "a44eb354-76b7-433b-b149-c7be025ae04c", 00:08:24.697 "assigned_rate_limits": { 00:08:24.697 "rw_ios_per_sec": 0, 00:08:24.697 "rw_mbytes_per_sec": 0, 00:08:24.697 "r_mbytes_per_sec": 0, 00:08:24.697 "w_mbytes_per_sec": 0 00:08:24.697 }, 00:08:24.697 "claimed": false, 00:08:24.697 "zoned": false, 00:08:24.697 "supported_io_types": { 00:08:24.697 "read": true, 00:08:24.697 "write": true, 00:08:24.697 "unmap": true, 00:08:24.697 "flush": false, 00:08:24.697 "reset": true, 00:08:24.697 "nvme_admin": false, 00:08:24.697 "nvme_io": false, 00:08:24.697 "nvme_io_md": false, 00:08:24.697 "write_zeroes": true, 00:08:24.697 "zcopy": false, 00:08:24.697 "get_zone_info": false, 00:08:24.697 "zone_management": false, 00:08:24.697 "zone_append": false, 00:08:24.697 "compare": false, 00:08:24.697 "compare_and_write": false, 00:08:24.697 "abort": false, 00:08:24.697 "seek_hole": true, 00:08:24.697 "seek_data": true, 00:08:24.697 "copy": false, 00:08:24.697 "nvme_iov_md": false 00:08:24.697 }, 00:08:24.697 "driver_specific": { 00:08:24.697 "lvol": { 00:08:24.697 "lvol_store_uuid": "631505fa-a455-4a20-a350-018f178e7816", 00:08:24.697 "base_bdev": "aio_bdev", 00:08:24.697 "thin_provision": false, 00:08:24.697 "num_allocated_clusters": 38, 00:08:24.697 "snapshot": false, 00:08:24.697 "clone": false, 00:08:24.697 "esnap_clone": false 00:08:24.697 } 00:08:24.697 } 00:08:24.697 } 00:08:24.697 ] 00:08:24.697 09:34:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:24.697 09:34:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:24.697 09:34:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 631505fa-a455-4a20-a350-018f178e7816 00:08:25.259 09:34:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:25.259 09:34:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 631505fa-a455-4a20-a350-018f178e7816 00:08:25.259 09:34:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:25.515 09:34:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:25.515 09:34:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a44eb354-76b7-433b-b149-c7be025ae04c 00:08:25.815 09:34:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 631505fa-a455-4a20-a350-018f178e7816 00:08:25.815 09:34:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.072 09:34:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:26.637 ************************************ 00:08:26.637 END TEST lvs_grow_dirty 00:08:26.637 ************************************ 00:08:26.637 00:08:26.637 real 0m21.969s 00:08:26.637 user 0m46.083s 00:08:26.637 sys 0m8.079s 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:26.637 nvmf_trace.0 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.637 09:34:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.637 rmmod nvme_tcp 00:08:26.637 rmmod nvme_fabrics 00:08:26.637 rmmod nvme_keyring 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66418 ']' 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66418 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66418 ']' 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66418 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66418 00:08:26.637 killing process with pid 66418 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66418' 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66418 00:08:26.637 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66418 00:08:26.894 09:34:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.894 09:34:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:26.894 09:34:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:26.894 09:34:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.894 09:34:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:26.894 09:34:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.894 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.894 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.894 09:34:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:26.894 00:08:26.894 real 0m43.200s 00:08:26.894 user 1m10.279s 00:08:26.894 sys 0m11.429s 00:08:26.894 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.894 09:34:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.894 ************************************ 00:08:26.894 END TEST nvmf_lvs_grow 00:08:26.894 ************************************ 00:08:26.894 09:34:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:26.894 09:34:21 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:27.152 09:34:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:27.152 09:34:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.152 09:34:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.152 ************************************ 00:08:27.152 START TEST nvmf_bdev_io_wait 00:08:27.152 ************************************ 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:27.152 * Looking for test storage... 
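With nvmf_lvs_grow finished (about 43 seconds wall-clock in this run), the harness moves straight on to the next suite. Every suite in this log is driven through the same run_test helper from autotest_common.sh: it takes a label plus a command, wraps the command in the START TEST / END TEST banners and the real/user/sys timing seen above, and fails the run if the command does. The two invocations visible around this point look like:

  # A label followed by a shell function and its arguments...
  run_test lvs_grow_dirty lvs_grow dirty
  # ...or by a standalone test script.
  run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp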
00:08:27.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:27.152 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:27.153 Cannot find device "nvmf_tgt_br" 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.153 Cannot find device "nvmf_tgt_br2" 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:27.153 Cannot find device "nvmf_tgt_br" 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:27.153 Cannot find device "nvmf_tgt_br2" 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:27.153 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
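The "Cannot find device" errors here, and the "Cannot open network namespace" ones just below, are expected: nvmf_veth_init begins by unconditionally tearing down any topology left over from a previous run, and each teardown command appears to be guarded so that a missing interface does not abort the test (hence the bare "true" traces right after the failures). The pattern is essentially:

  # Best-effort cleanup of a stale topology; ignore "no such device" errors.
  ip link set nvmf_tgt_br nomaster || true
  ip link delete nvmf_br type bridge || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true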
00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:27.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:27.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:27.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:27.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:08:27.411 00:08:27.411 --- 10.0.0.2 ping statistics --- 00:08:27.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.411 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:27.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:27.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:08:27.411 00:08:27.411 --- 10.0.0.3 ping statistics --- 00:08:27.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.411 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:27.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:27.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:27.411 00:08:27.411 --- 10.0.0.1 ping statistics --- 00:08:27.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.411 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66732 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66732 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66732 ']' 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.411 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.412 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
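At this point the test network is fully assembled and verified: nvmf_init_if (10.0.0.1) stays in the root namespace on the initiator side, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, their peer ends are enslaved to the nvmf_br bridge, and the three pings above confirm connectivity in both directions before the target application (which NVMF_APP now prefixes with "ip netns exec nvmf_tgt_ns_spdk") is started. Stripped of the xtrace noise, the topology comes down to:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # nvmf_tgt_if2/nvmf_tgt_br2 analogous
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

(The various "ip link set ... up" calls are omitted here for brevity.)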
00:08:27.412 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.412 09:34:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.669 [2024-07-15 09:34:21.917722] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:27.669 [2024-07-15 09:34:21.917799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.669 [2024-07-15 09:34:22.053279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.927 [2024-07-15 09:34:22.200297] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.927 [2024-07-15 09:34:22.200358] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.927 [2024-07-15 09:34:22.200373] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.927 [2024-07-15 09:34:22.200384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.927 [2024-07-15 09:34:22.200393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.927 [2024-07-15 09:34:22.200544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.927 [2024-07-15 09:34:22.201462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.927 [2024-07-15 09:34:22.201552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.927 [2024-07-15 09:34:22.201717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.860 09:34:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.860 09:34:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:28.860 09:34:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:28.860 09:34:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.860 09:34:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.860 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.861 [2024-07-15 09:34:23.081632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.861 
09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.861 [2024-07-15 09:34:23.094263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.861 Malloc0 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.861 [2024-07-15 09:34:23.154116] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66773 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66775 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:28.861 { 00:08:28.861 "params": { 00:08:28.861 "name": "Nvme$subsystem", 00:08:28.861 "trtype": "$TEST_TRANSPORT", 00:08:28.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.861 "adrfam": "ipv4", 00:08:28.861 "trsvcid": "$NVMF_PORT", 00:08:28.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.861 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:28.861 "hdgst": ${hdgst:-false}, 00:08:28.861 "ddgst": ${ddgst:-false} 00:08:28.861 }, 00:08:28.861 "method": "bdev_nvme_attach_controller" 00:08:28.861 } 00:08:28.861 EOF 00:08:28.861 )") 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66777 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:28.861 { 00:08:28.861 "params": { 00:08:28.861 "name": "Nvme$subsystem", 00:08:28.861 "trtype": "$TEST_TRANSPORT", 00:08:28.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.861 "adrfam": "ipv4", 00:08:28.861 "trsvcid": "$NVMF_PORT", 00:08:28.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.861 "hdgst": ${hdgst:-false}, 00:08:28.861 "ddgst": ${ddgst:-false} 00:08:28.861 }, 00:08:28.861 "method": "bdev_nvme_attach_controller" 00:08:28.861 } 00:08:28.861 EOF 00:08:28.861 )") 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66780 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:28.861 { 00:08:28.861 "params": { 00:08:28.861 "name": "Nvme$subsystem", 00:08:28.861 "trtype": "$TEST_TRANSPORT", 00:08:28.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.861 "adrfam": "ipv4", 00:08:28.861 "trsvcid": "$NVMF_PORT", 00:08:28.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.861 "hdgst": ${hdgst:-false}, 00:08:28.861 "ddgst": ${ddgst:-false} 00:08:28.861 }, 00:08:28.861 "method": "bdev_nvme_attach_controller" 00:08:28.861 } 00:08:28.861 EOF 00:08:28.861 )") 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:28.861 "params": { 00:08:28.861 "name": "Nvme1", 00:08:28.861 "trtype": "tcp", 00:08:28.861 "traddr": "10.0.0.2", 00:08:28.861 "adrfam": "ipv4", 00:08:28.861 "trsvcid": "4420", 00:08:28.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.861 "hdgst": false, 00:08:28.861 "ddgst": false 00:08:28.861 }, 00:08:28.861 "method": "bdev_nvme_attach_controller" 00:08:28.861 }' 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:28.861 "params": { 00:08:28.861 "name": "Nvme1", 00:08:28.861 "trtype": "tcp", 00:08:28.861 "traddr": "10.0.0.2", 00:08:28.861 "adrfam": "ipv4", 00:08:28.861 "trsvcid": "4420", 00:08:28.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.861 "hdgst": false, 00:08:28.861 "ddgst": false 00:08:28.861 }, 00:08:28.861 "method": "bdev_nvme_attach_controller" 00:08:28.861 }' 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:28.861 { 00:08:28.861 "params": { 00:08:28.861 "name": "Nvme$subsystem", 00:08:28.861 "trtype": "$TEST_TRANSPORT", 00:08:28.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.861 "adrfam": "ipv4", 00:08:28.861 "trsvcid": "$NVMF_PORT", 00:08:28.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.861 "hdgst": ${hdgst:-false}, 00:08:28.861 "ddgst": ${ddgst:-false} 00:08:28.861 }, 00:08:28.861 "method": "bdev_nvme_attach_controller" 00:08:28.861 } 00:08:28.861 EOF 00:08:28.861 )") 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
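Pretty-printed only for readability, the controller-attach fragment that the gen_nvmf_target_json heredocs above expand to (and that each bdevperf instance reads through the --json /dev/fd/63 redirection shown in its command line) is:

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}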
00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:28.861 "params": { 00:08:28.861 "name": "Nvme1", 00:08:28.861 "trtype": "tcp", 00:08:28.861 "traddr": "10.0.0.2", 00:08:28.861 "adrfam": "ipv4", 00:08:28.861 "trsvcid": "4420", 00:08:28.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.861 "hdgst": false, 00:08:28.861 "ddgst": false 00:08:28.861 }, 00:08:28.861 "method": "bdev_nvme_attach_controller" 00:08:28.861 }' 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:28.861 "params": { 00:08:28.861 "name": "Nvme1", 00:08:28.861 "trtype": "tcp", 00:08:28.861 "traddr": "10.0.0.2", 00:08:28.861 "adrfam": "ipv4", 00:08:28.861 "trsvcid": "4420", 00:08:28.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.861 "hdgst": false, 00:08:28.861 "ddgst": false 00:08:28.861 }, 00:08:28.861 "method": "bdev_nvme_attach_controller" 00:08:28.861 }' 00:08:28.861 [2024-07-15 09:34:23.214975] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:28.861 [2024-07-15 09:34:23.215561] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:28.861 [2024-07-15 09:34:23.225470] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:28.861 [2024-07-15 09:34:23.225853] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:28.861 09:34:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66773 00:08:28.861 [2024-07-15 09:34:23.246452] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:28.861 [2024-07-15 09:34:23.246537] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:28.861 [2024-07-15 09:34:23.262746] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:08:28.861 [2024-07-15 09:34:23.262851] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:29.117 [2024-07-15 09:34:23.433510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.117 [2024-07-15 09:34:23.506937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.117 [2024-07-15 09:34:23.557917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:29.373 [2024-07-15 09:34:23.584647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.373 [2024-07-15 09:34:23.608291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.373 [2024-07-15 09:34:23.619093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:29.373 [2024-07-15 09:34:23.666087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.373 [2024-07-15 09:34:23.668514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.373 [2024-07-15 09:34:23.705759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:29.373 Running I/O for 1 seconds... 00:08:29.373 [2024-07-15 09:34:23.756868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.373 Running I/O for 1 seconds... 00:08:29.373 [2024-07-15 09:34:23.776554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:29.373 [2024-07-15 09:34:23.823942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:29.631 Running I/O for 1 seconds... 00:08:29.631 Running I/O for 1 seconds... 
00:08:30.562 00:08:30.562 Latency(us) 00:08:30.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.562 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:30.562 Nvme1n1 : 1.02 6600.69 25.78 0.00 0.00 19129.94 7477.06 39083.29 00:08:30.562 =================================================================================================================== 00:08:30.562 Total : 6600.69 25.78 0.00 0.00 19129.94 7477.06 39083.29 00:08:30.562 00:08:30.562 Latency(us) 00:08:30.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.562 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:30.562 Nvme1n1 : 1.00 142057.73 554.91 0.00 0.00 897.75 444.97 2442.71 00:08:30.562 =================================================================================================================== 00:08:30.562 Total : 142057.73 554.91 0.00 0.00 897.75 444.97 2442.71 00:08:30.562 00:08:30.562 Latency(us) 00:08:30.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.562 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:30.562 Nvme1n1 : 1.01 7592.35 29.66 0.00 0.00 16765.44 2546.97 28001.75 00:08:30.562 =================================================================================================================== 00:08:30.562 Total : 7592.35 29.66 0.00 0.00 16765.44 2546.97 28001.75 00:08:30.562 00:08:30.562 Latency(us) 00:08:30.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.562 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:30.562 Nvme1n1 : 1.01 6030.70 23.56 0.00 0.00 21116.73 9472.93 46470.98 00:08:30.562 =================================================================================================================== 00:08:30.562 Total : 6030.70 23.56 0.00 0.00 21116.73 9472.93 46470.98 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66775 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66777 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66780 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.819 rmmod nvme_tcp 00:08:30.819 rmmod nvme_fabrics 00:08:30.819 rmmod nvme_keyring 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66732 ']' 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66732 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66732 ']' 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66732 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:30.819 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66732 00:08:31.076 killing process with pid 66732 00:08:31.076 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:31.076 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:31.076 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66732' 00:08:31.076 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66732 00:08:31.076 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66732 00:08:31.334 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.334 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.334 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:31.334 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.334 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.334 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.335 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.335 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.335 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:31.335 ************************************ 00:08:31.335 END TEST nvmf_bdev_io_wait 00:08:31.335 ************************************ 00:08:31.335 00:08:31.335 real 0m4.206s 00:08:31.335 user 0m18.224s 00:08:31.335 sys 0m2.337s 00:08:31.335 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.335 09:34:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.335 09:34:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:31.335 09:34:25 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:31.335 09:34:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:31.335 09:34:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.335 09:34:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:31.335 ************************************ 00:08:31.335 START TEST nvmf_queue_depth 00:08:31.335 ************************************ 00:08:31.335 09:34:25 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:31.335 * Looking for test storage... 00:08:31.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:31.335 Cannot find device "nvmf_tgt_br" 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.335 Cannot find device "nvmf_tgt_br2" 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:31.335 Cannot find device "nvmf_tgt_br" 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:31.335 Cannot find device "nvmf_tgt_br2" 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:31.335 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:31.335 09:34:25 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:31.594 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:08:31.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:08:31.595 00:08:31.595 --- 10.0.0.2 ping statistics --- 00:08:31.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.595 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:31.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:31.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:08:31.595 00:08:31.595 --- 10.0.0.3 ping statistics --- 00:08:31.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.595 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:08:31.595 00:08:31.595 --- 10.0.0.1 ping statistics --- 00:08:31.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.595 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=67005 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 67005 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 67005 ']' 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.595 09:34:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.595 [2024-07-15 09:34:26.037034] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:31.595 [2024-07-15 09:34:26.037129] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.852 [2024-07-15 09:34:26.172045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.852 [2024-07-15 09:34:26.285299] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.852 [2024-07-15 09:34:26.285396] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.852 [2024-07-15 09:34:26.285409] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.852 [2024-07-15 09:34:26.285418] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.852 [2024-07-15 09:34:26.285425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.852 [2024-07-15 09:34:26.285457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.110 [2024-07-15 09:34:26.341269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:32.677 09:34:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.677 09:34:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:32.677 09:34:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.677 09:34:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.677 09:34:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.677 09:34:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.677 09:34:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.677 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.677 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.677 [2024-07-15 09:34:27.015824] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.677 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.677 09:34:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:32.677 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.677 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.677 Malloc0 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.678 [2024-07-15 09:34:27.087827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67037 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67037 /var/tmp/bdevperf.sock 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 67037 ']' 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.678 09:34:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.678 [2024-07-15 09:34:27.140078] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:08:32.678 [2024-07-15 09:34:27.140406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67037 ] 00:08:32.936 [2024-07-15 09:34:27.278604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.209 [2024-07-15 09:34:27.414906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.209 [2024-07-15 09:34:27.475763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:33.774 09:34:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.774 09:34:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:33.774 09:34:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:33.774 09:34:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.774 09:34:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.774 NVMe0n1 00:08:33.774 09:34:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.774 09:34:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:34.032 Running I/O for 10 seconds... 00:08:43.998 00:08:43.998 Latency(us) 00:08:43.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.998 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:43.998 Verification LBA range: start 0x0 length 0x4000 00:08:43.998 NVMe0n1 : 10.08 7575.09 29.59 0.00 0.00 134477.70 22401.40 103427.72 00:08:43.998 =================================================================================================================== 00:08:43.998 Total : 7575.09 29.59 0.00 0.00 134477.70 22401.40 103427.72 00:08:43.998 0 00:08:43.998 09:34:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67037 00:08:43.998 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 67037 ']' 00:08:43.998 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 67037 00:08:43.998 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:43.998 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:43.998 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67037 00:08:43.998 killing process with pid 67037 00:08:43.998 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.998 00:08:43.998 Latency(us) 00:08:43.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.998 =================================================================================================================== 00:08:43.998 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.998 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:43.998 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:43.998 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 67037' 00:08:43.998 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 67037 00:08:43.998 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 67037 00:08:44.257 09:34:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:44.257 09:34:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:44.257 09:34:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:44.257 09:34:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:44.516 rmmod nvme_tcp 00:08:44.516 rmmod nvme_fabrics 00:08:44.516 rmmod nvme_keyring 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 67005 ']' 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 67005 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 67005 ']' 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 67005 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67005 00:08:44.516 killing process with pid 67005 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67005' 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 67005 00:08:44.516 09:34:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 67005 00:08:44.774 09:34:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.774 09:34:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.774 09:34:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.774 09:34:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.774 09:34:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.774 09:34:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.774 09:34:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.774 09:34:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.774 09:34:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:44.774 00:08:44.774 real 
0m13.475s 00:08:44.774 user 0m23.435s 00:08:44.774 sys 0m2.208s 00:08:44.774 ************************************ 00:08:44.774 END TEST nvmf_queue_depth 00:08:44.774 ************************************ 00:08:44.774 09:34:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.774 09:34:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.774 09:34:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:44.774 09:34:39 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:44.774 09:34:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.774 09:34:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.774 09:34:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.774 ************************************ 00:08:44.774 START TEST nvmf_target_multipath 00:08:44.774 ************************************ 00:08:44.774 09:34:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:44.774 * Looking for test storage... 00:08:44.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:44.774 09:34:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:44.774 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:44.774 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.774 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.775 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.775 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.775 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.775 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.775 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.775 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.775 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.775 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.775 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:08:44.775 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:45.034 09:34:39 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:45.034 Cannot find device "nvmf_tgt_br" 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:45.034 Cannot find device "nvmf_tgt_br2" 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:45.034 Cannot find device "nvmf_tgt_br" 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:45.034 Cannot find device "nvmf_tgt_br2" 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:45.034 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:45.034 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:45.034 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
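Note on the block above: the "Cannot find device ..." and "Cannot open network namespace ..." messages are expected on a clean host. nvmf_veth_init first tears down any leftover interfaces and namespace from a previous run, and each cleanup command is followed by true in the trace, so its failure is tolerated. The setup that follows creates three veth pairs: the *_if end of each pair carries an address (nvmf_init_if stays in the root namespace as the initiator, while nvmf_tgt_if and nvmf_tgt_if2 are moved into nvmf_tgt_ns_spdk as the target's two portals), and the *_br peers remain in the root namespace to be attached to a bridge in the next step. A minimal sketch of one such pair, using the names and addresses from the trace (not part of the harness itself):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # veth pair: target end + bridge-side end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end moves into the namespace
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up                                     # bridge-side peer stays in the root namespace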
00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:45.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:45.293 00:08:45.293 --- 10.0.0.2 ping statistics --- 00:08:45.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.293 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:45.293 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:45.293 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:45.294 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:45.294 00:08:45.294 --- 10.0.0.3 ping statistics --- 00:08:45.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.294 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:45.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:45.294 00:08:45.294 --- 10.0.0.1 ping statistics --- 00:08:45.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.294 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:45.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67361 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67361 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67361 ']' 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.294 09:34:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:45.294 [2024-07-15 09:34:39.691674] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
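At this point the virtual topology is complete: the initiator address 10.0.0.1/24 sits on nvmf_init_if in the root namespace, the target addresses 10.0.0.2/24 and 10.0.0.3/24 sit on nvmf_tgt_if and nvmf_tgt_if2 inside nvmf_tgt_ns_spdk, and the three bridge-side veth ends are enslaved to nvmf_br so everything shares one L2 segment. The iptables rules accept TCP port 4420 on nvmf_init_if and allow traffic forwarded across nvmf_br, and the three pings confirm reachability in both directions. Because NVMF_APP is prefixed with the namespace command, nvmf_tgt (and every listener added to it later via RPC) runs entirely inside nvmf_tgt_ns_spdk. A sketch of how that looks outside the harness; the ss check is an illustrative assumption, not something the script runs:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # once listeners have been added via rpc.py, they are only visible from inside the namespace:
  ip netns exec nvmf_tgt_ns_spdk ss -ltn    # expect 10.0.0.2:4420 and 10.0.0.3:4420, assuming ss is installed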
00:08:45.294 [2024-07-15 09:34:39.691767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.552 [2024-07-15 09:34:39.833764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.552 [2024-07-15 09:34:39.971008] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.552 [2024-07-15 09:34:39.971075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.552 [2024-07-15 09:34:39.971097] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.552 [2024-07-15 09:34:39.971115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.552 [2024-07-15 09:34:39.971128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.552 [2024-07-15 09:34:39.971259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.552 [2024-07-15 09:34:39.971360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.552 [2024-07-15 09:34:39.971883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.552 [2024-07-15 09:34:39.971926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.810 [2024-07-15 09:34:40.028943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:46.377 09:34:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.377 09:34:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:08:46.378 09:34:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.378 09:34:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.378 09:34:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:46.378 09:34:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.378 09:34:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:46.636 [2024-07-15 09:34:40.941831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.636 09:34:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:46.912 Malloc0 00:08:46.912 09:34:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:47.198 09:34:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.456 09:34:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.714 [2024-07-15 09:34:42.033018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.714 09:34:42 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:47.973 [2024-07-15 09:34:42.273261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:47.973 09:34:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid=d2f81337-7559-423d-93ce-5836d202b6da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:08:47.973 09:34:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid=d2f81337-7559-423d-93ce-5836d202b6da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:48.231 09:34:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:48.231 09:34:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:08:48.231 09:34:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:48.231 09:34:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:48.231 09:34:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:50.128 09:34:44 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67459 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:50.128 09:34:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:50.385 [global] 00:08:50.385 thread=1 00:08:50.385 invalidate=1 00:08:50.385 rw=randrw 00:08:50.385 time_based=1 00:08:50.385 runtime=6 00:08:50.385 ioengine=libaio 00:08:50.385 direct=1 00:08:50.385 bs=4096 00:08:50.385 iodepth=128 00:08:50.385 norandommap=0 00:08:50.385 numjobs=1 00:08:50.385 00:08:50.385 verify_dump=1 00:08:50.385 verify_backlog=512 00:08:50.385 verify_state_save=0 00:08:50.385 do_verify=1 00:08:50.385 verify=crc32c-intel 00:08:50.385 [job0] 00:08:50.385 filename=/dev/nvme0n1 00:08:50.385 Could not set queue depth (nvme0n1) 00:08:50.385 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:50.385 fio-3.35 00:08:50.385 Starting 1 thread 00:08:51.344 09:34:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:51.602 09:34:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:51.859 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:51.859 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:51.859 
09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:51.859 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:51.859 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:51.859 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:51.859 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:51.859 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:51.859 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:51.859 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:51.859 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:51.859 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:51.859 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:52.116 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:52.373 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:52.373 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:52.373 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:52.373 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:52.373 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:52.373 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:52.373 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:52.373 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:52.373 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:52.373 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:52.373 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:52.374 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:52.374 09:34:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67459 00:08:56.621 00:08:56.621 job0: (groupid=0, jobs=1): err= 0: pid=67480: Mon Jul 15 09:34:50 2024 00:08:56.621 read: IOPS=10.6k, BW=41.5MiB/s (43.5MB/s)(249MiB/6000msec) 00:08:56.621 slat (usec): min=6, max=7759, avg=55.23, stdev=222.80 00:08:56.621 clat (usec): min=1313, max=18768, avg=8228.22, stdev=1542.28 00:08:56.621 lat (usec): min=1329, max=18782, avg=8283.45, stdev=1547.66 00:08:56.621 clat percentiles (usec): 00:08:56.621 | 1.00th=[ 4228], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 7504], 00:08:56.621 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8029], 60.00th=[ 8160], 00:08:56.621 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9634], 95.00th=[11731], 00:08:56.621 | 99.00th=[13042], 99.50th=[14222], 99.90th=[16581], 99.95th=[16581], 00:08:56.621 | 99.99th=[18744] 00:08:56.621 bw ( KiB/s): min= 7072, max=27185, per=51.07%, avg=21702.00, stdev=6846.51, samples=11 00:08:56.621 iops : min= 1768, max= 6796, avg=5425.45, stdev=1711.59, samples=11 00:08:56.621 write: IOPS=6311, BW=24.7MiB/s (25.9MB/s)(129MiB/5233msec); 0 zone resets 00:08:56.621 slat (usec): min=12, max=2897, avg=64.55, stdev=154.51 00:08:56.621 clat (usec): min=856, max=16292, avg=7162.70, stdev=1358.86 00:08:56.621 lat (usec): min=889, max=16319, avg=7227.25, stdev=1363.78 00:08:56.621 clat percentiles (usec): 00:08:56.621 | 1.00th=[ 3261], 5.00th=[ 4228], 10.00th=[ 5538], 20.00th=[ 6652], 00:08:56.621 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7439], 00:08:56.621 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8225], 95.00th=[ 9110], 00:08:56.621 | 99.00th=[11207], 99.50th=[11863], 99.90th=[13829], 99.95th=[14484], 00:08:56.621 | 99.99th=[14877] 00:08:56.621 bw ( KiB/s): min= 7528, max=26602, per=86.26%, avg=21778.55, stdev=6606.81, samples=11 00:08:56.621 iops : min= 1882, max= 6650, avg=5444.55, stdev=1651.63, samples=11 00:08:56.621 lat (usec) : 1000=0.01% 00:08:56.621 lat (msec) : 2=0.06%, 4=1.69%, 10=91.18%, 20=7.05% 00:08:56.621 cpu : usr=5.65%, sys=22.40%, ctx=5806, majf=0, minf=114 00:08:56.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:56.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:56.621 issued rwts: total=63738,33030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:56.621 00:08:56.621 Run status group 0 (all jobs): 00:08:56.621 READ: bw=41.5MiB/s (43.5MB/s), 41.5MiB/s-41.5MiB/s (43.5MB/s-43.5MB/s), io=249MiB (261MB), run=6000-6000msec 00:08:56.621 WRITE: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=129MiB (135MB), run=5233-5233msec 00:08:56.621 00:08:56.621 Disk stats (read/write): 00:08:56.621 nvme0n1: ios=62713/32518, merge=0/0, ticks=494891/218261, in_queue=713152, util=98.63% 00:08:56.621 09:34:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:08:56.879 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67555 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:57.137 09:34:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:57.137 [global] 00:08:57.137 thread=1 00:08:57.137 invalidate=1 00:08:57.137 rw=randrw 00:08:57.137 time_based=1 00:08:57.137 runtime=6 00:08:57.137 ioengine=libaio 00:08:57.137 direct=1 00:08:57.137 bs=4096 00:08:57.137 iodepth=128 00:08:57.137 norandommap=0 00:08:57.137 numjobs=1 00:08:57.137 00:08:57.137 verify_dump=1 00:08:57.137 verify_backlog=512 00:08:57.137 verify_state_save=0 00:08:57.137 do_verify=1 00:08:57.137 verify=crc32c-intel 00:08:57.137 [job0] 00:08:57.137 filename=/dev/nvme0n1 00:08:57.137 Could not set queue depth (nvme0n1) 00:08:57.395 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:57.395 fio-3.35 00:08:57.395 Starting 1 thread 00:08:58.330 09:34:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:08:58.595 09:34:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
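The failover exercise that both fio runs repeat works the same way: while fio drives I/O to /dev/nvme0n1 (the native-multipath head node backed by the two paths nvme0c0n1 and nvme0c1n1), the script changes the ANA state of one listener at a time on the target and then waits for the host to report the new state for the corresponding path. Note the small naming mismatch visible in the trace: the RPC takes non_optimized with an underscore, while the kernel reports non-optimized with a hyphen in sysfs. The echo numa / echo round-robin steps select the native multipath I/O policy before each run; the redirection target is not shown in the xtrace output, but it is presumably the subsystem's iopolicy attribute under /sys/class/nvme-subsystem/. A hedged approximation of the pattern (the real check_ana_state loop lives in target/multipath.sh; only its variable setup and comparisons appear in the trace):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # poll the host-side ANA state of one path until it matches, for up to ~20 seconds
  check_ana_state() {
      local path=$1 ana_state=$2 timeout=20
      local ana_state_f=/sys/block/$path/ana_state
      while [[ ! -e "$ana_state_f" ]] || [[ "$(cat "$ana_state_f")" != "$ana_state" ]]; do
          (( timeout-- > 0 )) || return 1
          sleep 1
      done
  }

  # take the 10.0.0.2 portal out of service and verify the host notices it on path nvme0c0n1
  "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  check_ana_state nvme0c0n1 inaccessible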
00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:58.853 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:08:59.111 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:59.369 09:34:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67555 00:09:03.577 00:09:03.577 job0: (groupid=0, jobs=1): err= 0: pid=67577: Mon Jul 15 09:34:57 2024 00:09:03.577 read: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(273MiB/6007msec) 00:09:03.577 slat (usec): min=2, max=8021, avg=42.75, stdev=190.88 00:09:03.578 clat (usec): min=494, max=16847, avg=7507.03, stdev=1947.52 00:09:03.578 lat (usec): min=515, max=16865, avg=7549.77, stdev=1963.10 00:09:03.578 clat percentiles (usec): 00:09:03.578 | 1.00th=[ 2933], 5.00th=[ 4228], 10.00th=[ 4817], 20.00th=[ 5735], 00:09:03.578 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8094], 00:09:03.578 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[11207], 00:09:03.578 | 99.00th=[12780], 99.50th=[13173], 99.90th=[13960], 99.95th=[15270], 00:09:03.578 | 99.99th=[16712] 00:09:03.578 bw ( KiB/s): min= 8167, max=39273, per=54.33%, avg=25298.91, stdev=8964.18, samples=11 00:09:03.578 iops : min= 2041, max= 9818, avg=6324.64, stdev=2241.15, samples=11 00:09:03.578 write: IOPS=7077, BW=27.6MiB/s (29.0MB/s)(147MiB/5319msec); 0 zone resets 00:09:03.578 slat (usec): min=3, max=7828, avg=54.66, stdev=136.56 00:09:03.578 clat (usec): min=954, max=16580, avg=6308.17, stdev=1854.63 00:09:03.578 lat (usec): min=1020, max=16605, avg=6362.83, stdev=1869.43 00:09:03.578 clat percentiles (usec): 00:09:03.578 | 1.00th=[ 2671], 5.00th=[ 3294], 10.00th=[ 3687], 20.00th=[ 4293], 00:09:03.578 | 30.00th=[ 4948], 40.00th=[ 6259], 50.00th=[ 6915], 60.00th=[ 7242], 00:09:03.578 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8094], 95.00th=[ 8455], 00:09:03.578 | 99.00th=[11076], 99.50th=[11600], 99.90th=[14746], 99.95th=[16057], 00:09:03.578 | 99.99th=[16450] 00:09:03.578 bw ( KiB/s): min= 8287, max=39888, per=89.37%, avg=25301.00, stdev=8764.91, samples=11 00:09:03.578 iops : min= 2071, max= 9972, avg=6325.18, stdev=2191.37, samples=11 00:09:03.578 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:09:03.578 lat (msec) : 2=0.22%, 4=7.52%, 10=87.51%, 20=4.71% 00:09:03.578 cpu : usr=6.29%, sys=24.98%, ctx=6496, majf=0, minf=92 00:09:03.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:09:03.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.578 issued rwts: total=69924,37646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.578 00:09:03.578 Run status group 0 (all jobs): 00:09:03.578 READ: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=273MiB (286MB), run=6007-6007msec 00:09:03.578 WRITE: bw=27.6MiB/s (29.0MB/s), 27.6MiB/s-27.6MiB/s (29.0MB/s-29.0MB/s), io=147MiB (154MB), run=5319-5319msec 00:09:03.578 00:09:03.578 Disk stats (read/write): 00:09:03.578 nvme0n1: ios=69070/37066, merge=0/0, ticks=489913/214434, in_queue=704347, util=98.68% 00:09:03.578 09:34:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:03.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:03.578 09:34:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:03.578 09:34:57 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:03.578 09:34:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:03.578 09:34:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.578 09:34:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.578 09:34:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:03.578 09:34:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:03.578 09:34:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.896 09:34:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:03.896 09:34:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:03.896 09:34:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:03.896 09:34:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:03.896 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.896 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:03.896 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.896 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:03.896 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.896 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.896 rmmod nvme_tcp 00:09:03.896 rmmod nvme_fabrics 00:09:03.896 rmmod nvme_keyring 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67361 ']' 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67361 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67361 ']' 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67361 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67361 00:09:03.897 killing process with pid 67361 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67361' 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67361 00:09:03.897 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67361 00:09:04.155 
09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:04.155 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:04.155 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:04.155 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.155 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:04.155 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.155 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.155 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.155 09:34:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:04.429 ************************************ 00:09:04.429 END TEST nvmf_target_multipath 00:09:04.429 ************************************ 00:09:04.429 00:09:04.429 real 0m19.473s 00:09:04.429 user 1m13.231s 00:09:04.429 sys 0m9.656s 00:09:04.429 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.429 09:34:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:04.429 09:34:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:04.429 09:34:58 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:04.429 09:34:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:04.429 09:34:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.429 09:34:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.429 ************************************ 00:09:04.429 START TEST nvmf_zcopy 00:09:04.429 ************************************ 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:04.429 * Looking for test storage... 
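As with the queue-depth test earlier, the "START TEST" / "END TEST" banners and the real/user/sys summary come from the run_test wrapper in autotest_common.sh rather than from the test script itself: run_test times the script and prints the banners around it, and the next test (nvmf_zcopy, launched just above) is driven the same way. A rough, simplified sketch of that shape, based only on what is visible in the trace (the actual helper also manages xtrace and error propagation):

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                     # produces the real/user/sys lines seen in the log
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }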
00:09:04.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.429 09:34:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:04.430 Cannot find device "nvmf_tgt_br" 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:04.430 Cannot find device "nvmf_tgt_br2" 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:04.430 Cannot find device "nvmf_tgt_br" 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:04.430 Cannot find device "nvmf_tgt_br2" 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:04.430 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:04.687 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:04.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.687 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:04.687 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:04.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.687 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:04.688 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:04.688 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:04.688 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:04.688 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:04.688 09:34:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:04.688 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:04.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:09:04.945 00:09:04.945 --- 10.0.0.2 ping statistics --- 00:09:04.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.945 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:09:04.945 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:04.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:04.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:09:04.945 00:09:04.945 --- 10.0.0.3 ping statistics --- 00:09:04.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.945 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:04.945 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:04.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:04.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:04.945 00:09:04.945 --- 10.0.0.1 ping statistics --- 00:09:04.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.945 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:04.945 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.945 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:04.945 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:04.945 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.945 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:04.945 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:04.945 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.945 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67829 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67829 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67829 ']' 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.946 09:34:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:04.946 [2024-07-15 09:34:59.245054] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:04.946 [2024-07-15 09:34:59.245137] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.946 [2024-07-15 09:34:59.384727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.203 [2024-07-15 09:34:59.518085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.203 [2024-07-15 09:34:59.518173] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:05.203 [2024-07-15 09:34:59.518188] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.203 [2024-07-15 09:34:59.518199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.203 [2024-07-15 09:34:59.518208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.203 [2024-07-15 09:34:59.518246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.203 [2024-07-15 09:34:59.578829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.136 [2024-07-15 09:35:00.283634] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.136 [2024-07-15 09:35:00.299715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:06.136 malloc0 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:06.136 { 00:09:06.136 "params": { 00:09:06.136 "name": "Nvme$subsystem", 00:09:06.136 "trtype": "$TEST_TRANSPORT", 00:09:06.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.136 "adrfam": "ipv4", 00:09:06.136 "trsvcid": "$NVMF_PORT", 00:09:06.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.136 "hdgst": ${hdgst:-false}, 00:09:06.136 "ddgst": ${ddgst:-false} 00:09:06.136 }, 00:09:06.136 "method": "bdev_nvme_attach_controller" 00:09:06.136 } 00:09:06.136 EOF 00:09:06.136 )") 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:06.136 09:35:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:06.136 "params": { 00:09:06.136 "name": "Nvme1", 00:09:06.136 "trtype": "tcp", 00:09:06.137 "traddr": "10.0.0.2", 00:09:06.137 "adrfam": "ipv4", 00:09:06.137 "trsvcid": "4420", 00:09:06.137 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.137 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.137 "hdgst": false, 00:09:06.137 "ddgst": false 00:09:06.137 }, 00:09:06.137 "method": "bdev_nvme_attach_controller" 00:09:06.137 }' 00:09:06.137 [2024-07-15 09:35:00.394034] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:06.137 [2024-07-15 09:35:00.394132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67862 ] 00:09:06.137 [2024-07-15 09:35:00.537473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.394 [2024-07-15 09:35:00.668164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.394 [2024-07-15 09:35:00.738651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.652 Running I/O for 10 seconds... 
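The xtrace above is the complete target bring-up for this test: a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a malloc bdev attached as namespace 1, after which bdevperf starts the 10-second verify run whose results follow. As a consolidated sketch only (not part of the captured output), the same configuration could be applied by hand with rpc.py against the nvmf_tgt started earlier; the scripts/rpc.py path, the bdevperf.json file, and the plain-shell form are assumptions here, while the RPC names and arguments are the ones recorded above:

    # Target side: TCP transport with zero-copy and in-capsule data size 0 (flags as recorded above).
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem with allow-any-host, a serial number, and a 10-namespace cap, listening on the veth address.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, exposed as namespace 1 of cnode1.
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Initiator side: bdevperf against that listener; bdevperf.json is an assumed pre-written file
    # holding the bdev_nvme_attach_controller config printed just above (traddr 10.0.0.2, trsvcid 4420).
    build/examples/bdevperf --json bdevperf.json -t 10 -q 128 -w verify -o 8192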
00:09:16.613 00:09:16.613 Latency(us) 00:09:16.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.613 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:16.613 Verification LBA range: start 0x0 length 0x1000 00:09:16.613 Nvme1n1 : 10.02 5665.38 44.26 0.00 0.00 22522.57 1765.00 35985.22 00:09:16.613 =================================================================================================================== 00:09:16.613 Total : 5665.38 44.26 0.00 0.00 22522.57 1765.00 35985.22 00:09:16.871 09:35:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67984 00:09:16.872 09:35:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:16.872 09:35:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:16.872 09:35:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:16.872 09:35:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:16.872 09:35:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:16.872 09:35:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:16.872 09:35:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.872 09:35:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:16.872 { 00:09:16.872 "params": { 00:09:16.872 "name": "Nvme$subsystem", 00:09:16.872 "trtype": "$TEST_TRANSPORT", 00:09:16.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.872 "adrfam": "ipv4", 00:09:16.872 "trsvcid": "$NVMF_PORT", 00:09:16.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.872 "hdgst": ${hdgst:-false}, 00:09:16.872 "ddgst": ${ddgst:-false} 00:09:16.872 }, 00:09:16.872 "method": "bdev_nvme_attach_controller" 00:09:16.872 } 00:09:16.872 EOF 00:09:16.872 )") 00:09:16.872 09:35:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:16.872 [2024-07-15 09:35:11.139405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.139663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 09:35:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:16.872 09:35:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:16.872 09:35:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:16.872 "params": { 00:09:16.872 "name": "Nvme1", 00:09:16.872 "trtype": "tcp", 00:09:16.872 "traddr": "10.0.0.2", 00:09:16.872 "adrfam": "ipv4", 00:09:16.872 "trsvcid": "4420", 00:09:16.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:16.872 "hdgst": false, 00:09:16.872 "ddgst": false 00:09:16.872 }, 00:09:16.872 "method": "bdev_nvme_attach_controller" 00:09:16.872 }' 00:09:16.872 [2024-07-15 09:35:11.151381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.151422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.163367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.163405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.175382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.175420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.187358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.187390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.192313] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:16.872 [2024-07-15 09:35:11.192449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67984 ] 00:09:16.872 [2024-07-15 09:35:11.199374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.199412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.211390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.211419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.223397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.223427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.235388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.235418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.247390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.247419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.259409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.259453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.271395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.271425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.283399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.283426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.295413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.295442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.307405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.307451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.319416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.319461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.331452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.872 [2024-07-15 09:35:11.331502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.872 [2024-07-15 09:35:11.334704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.130 [2024-07-15 09:35:11.343446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.343483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.355449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.355489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.367448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.367491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.379438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.379484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.391449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.391479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.403433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.403460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.415461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.415493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.427452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.427477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.439451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 
[2024-07-15 09:35:11.439478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.451445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.451471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.463474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.463518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.467045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.131 [2024-07-15 09:35:11.475507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.475543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.487544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.487579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.499500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.499535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.511526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.511562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.523560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.523596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.535561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.535592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.535802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:17.131 [2024-07-15 09:35:11.547543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.547576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.559552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.559583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.571527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.571554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.583565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.583600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.131 [2024-07-15 09:35:11.595578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.131 [2024-07-15 09:35:11.595613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.607608] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.607641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.619610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.619643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.631609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.631642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.643642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.643809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 Running I/O for 5 seconds... 00:09:17.391 [2024-07-15 09:35:11.655643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.655671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.674082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.674122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.688792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.688826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.698698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.698844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.714935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.715099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.731396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.731552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.749963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.750169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.764988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.765071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.780989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.781169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.790230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.790396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.806952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 
[2024-07-15 09:35:11.806989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.824663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.824700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.841408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.841446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.391 [2024-07-15 09:35:11.857645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.391 [2024-07-15 09:35:11.857696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:11.874231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:11.874269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:11.890299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:11.890335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:11.908157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:11.908196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:11.923350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:11.923390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:11.941121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:11.941158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:11.956137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:11.956174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:11.971291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:11.971328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:11.986600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:11.986638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:12.005235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:12.005273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:12.020695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:12.020731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:12.037102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:12.037140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:12.047573] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:12.047609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:12.063221] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:12.063260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:12.078181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:12.078233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:12.093677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:12.093712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.650 [2024-07-15 09:35:12.112787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.650 [2024-07-15 09:35:12.112826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.128399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.128437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.144438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.144473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.161448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.161486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.178056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.178141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.194782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.194817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.210687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.210721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.220168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.220205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.235760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.235796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.254810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.254848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.270245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.270300] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.288569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.288604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.303383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.303421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.320793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.320835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.336536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.336574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.354765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.354802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.908 [2024-07-15 09:35:12.369886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:17.908 [2024-07-15 09:35:12.369967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.166 [2024-07-15 09:35:12.387810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.387856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.402688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.402732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.418496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.418544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.436344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.436387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.451603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.451653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.467148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.467193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.484809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.484855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.500598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.500637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.519653] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.519692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.535012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.535060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.552804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.552856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.567751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.567801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.577567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.577611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.593673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.593733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.610063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.610115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.167 [2024-07-15 09:35:12.626411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.167 [2024-07-15 09:35:12.626467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.642764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.642817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.652438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.652484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.668726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.668782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.684257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.684311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.700080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.700125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.718008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.718060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.733379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.733430] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.743198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.743237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.759442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.759487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.775529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.775579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.792804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.792843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.809292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.809335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.827316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.827349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.847538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.847571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.863957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.863991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.425 [2024-07-15 09:35:12.881694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.425 [2024-07-15 09:35:12.881731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:12.896745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:12.896839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:12.912193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:12.912230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:12.929939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:12.929976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:12.945510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:12.945548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:12.954459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:12.954496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:12.970261] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:12.970313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:12.988434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:12.988472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:13.003524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:13.003562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:13.013849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:13.013886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:13.029308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:13.029345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:13.044098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:13.044135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:13.060340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:13.060413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:13.077059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:13.077098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:13.093133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:13.093168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:13.110842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:13.110877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:13.126166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:13.126204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.683 [2024-07-15 09:35:13.141816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.683 [2024-07-15 09:35:13.141853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.160643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.160681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.175811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.175850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.185470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.185507] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.201418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.201455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.218931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.218983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.234787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.234825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.252360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.252428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.268173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.268209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.285808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.285846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.300322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.300359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.317652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.317706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.334480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.334518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.350537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.350578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.367489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.367529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.385073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.385110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.400245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.400285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.943 [2024-07-15 09:35:13.409822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.943 [2024-07-15 09:35:13.409860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.424361] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.424397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.439765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.439803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.457439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.457480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.472510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.472548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.487528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.487582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.503198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.503236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.519491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.519528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.528507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.528544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.545226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.545280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.563902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.563954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.579325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.579362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.595574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.595610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.611948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.612003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.631547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.631585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.645124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.645164] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.200 [2024-07-15 09:35:13.661204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.200 [2024-07-15 09:35:13.661242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.458 [2024-07-15 09:35:13.678383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.458 [2024-07-15 09:35:13.678419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.458 [2024-07-15 09:35:13.694000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.458 [2024-07-15 09:35:13.694037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.458 [2024-07-15 09:35:13.711946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.458 [2024-07-15 09:35:13.711987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.458 [2024-07-15 09:35:13.727122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.727171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.736747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.736792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.752849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.752905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.769619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.769658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.786013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.786050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.796066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.796103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.807660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.807700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.824075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.824142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.835173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.835230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.847908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.847950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.863024] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.863063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.879175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.879215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.898160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.898197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.459 [2024-07-15 09:35:13.912885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.459 [2024-07-15 09:35:13.912931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.716 [2024-07-15 09:35:13.934271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.716 [2024-07-15 09:35:13.934443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.716 [2024-07-15 09:35:13.950307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.716 [2024-07-15 09:35:13.950459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.716 [2024-07-15 09:35:13.960216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.716 [2024-07-15 09:35:13.960378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.716 [2024-07-15 09:35:13.972216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.716 [2024-07-15 09:35:13.972373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.716 [2024-07-15 09:35:13.992395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.716 [2024-07-15 09:35:13.992556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.716 [2024-07-15 09:35:14.003431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.716 [2024-07-15 09:35:14.003588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.716 [2024-07-15 09:35:14.018531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.716 [2024-07-15 09:35:14.018571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.716 [2024-07-15 09:35:14.035132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.717 [2024-07-15 09:35:14.035168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.717 [2024-07-15 09:35:14.051105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.717 [2024-07-15 09:35:14.051140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.717 [2024-07-15 09:35:14.060798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.717 [2024-07-15 09:35:14.060835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.717 [2024-07-15 09:35:14.075748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.717 [2024-07-15 09:35:14.075789] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.717 [2024-07-15 09:35:14.091259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.717 [2024-07-15 09:35:14.091298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.717 [2024-07-15 09:35:14.100565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.717 [2024-07-15 09:35:14.100603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.717 [2024-07-15 09:35:14.117130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.717 [2024-07-15 09:35:14.117168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.717 [2024-07-15 09:35:14.135009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.717 [2024-07-15 09:35:14.135051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.717 [2024-07-15 09:35:14.150039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.717 [2024-07-15 09:35:14.150078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.717 [2024-07-15 09:35:14.165628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.717 [2024-07-15 09:35:14.165668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.717 [2024-07-15 09:35:14.183337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.717 [2024-07-15 09:35:14.183376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.198506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.198544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.214603] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.214641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.231320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.231362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.247980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.248019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.264481] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.264524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.280453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.280491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.298205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.298244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.313822] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.313863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.323274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.323312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.339494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.339534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.356441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.356478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.373198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.373236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.389705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.389744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.406347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.406387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.424759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.424799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.975 [2024-07-15 09:35:14.439810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.975 [2024-07-15 09:35:14.439853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.449920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.449957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.465568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.465610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.477280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.477346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.489159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.489200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.505285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.505341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.521649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.521689] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.540509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.540555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.555057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.555096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.565774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.565816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.578635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.578678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.591106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.591145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.603100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.603134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.614792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.614832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.626956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.627007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.639546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.639593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.652237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.652275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.663246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.663281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.675412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.675447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.690214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.690260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.234 [2024-07-15 09:35:14.701013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.234 [2024-07-15 09:35:14.701049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.493 [2024-07-15 09:35:14.713404] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.493 [2024-07-15 09:35:14.713439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.493 [2024-07-15 09:35:14.729473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.493 [2024-07-15 09:35:14.729513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.493 [2024-07-15 09:35:14.745324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.493 [2024-07-15 09:35:14.745366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.493 [2024-07-15 09:35:14.756393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.756433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.770048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.770090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.785215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.785262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.802058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.802098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.818369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.818416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.835326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.835374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.852137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.852170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.868381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.868415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.878167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.878200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.889753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.889790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.905599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.905633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.923567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.923617] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.939676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.939710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.494 [2024-07-15 09:35:14.949000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.494 [2024-07-15 09:35:14.949033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:14.961115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:14.961148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:14.972191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:14.972226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:14.986878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:14.986923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.005486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.005522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.020671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.020702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.037201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.037234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.055663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.055695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.066165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.066195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.080881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.080923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.097350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.097380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.116369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.116402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.131366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.131414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.140933] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.140980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.156837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.156871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.173753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.173787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.191732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.752 [2024-07-15 09:35:15.191765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.752 [2024-07-15 09:35:15.206224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.753 [2024-07-15 09:35:15.206257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.221265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.221305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.230973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.231007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.247145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.247178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.263322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.263356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.280824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.280857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.297293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.297335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.315322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.315354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.330008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.330040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.345411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.345454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.362742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.362777] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.379408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.379441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.396764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.396796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.411526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.411559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.428788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.428820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.442553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.442594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.457510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.457554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.015 [2024-07-15 09:35:15.475327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.015 [2024-07-15 09:35:15.475361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.490317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.490351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.500023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.500058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.516490] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.516526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.532246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.532279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.542289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.542322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.558361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.558405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.573708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.573756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.588930] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.588958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.605094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.605127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.622361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.622396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.636888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.636934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.651677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.651711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.666963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.667004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.676433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.676465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.692633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.692666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.709372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.709404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.273 [2024-07-15 09:35:15.725918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.273 [2024-07-15 09:35:15.725940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.532 [2024-07-15 09:35:15.743537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.532 [2024-07-15 09:35:15.743570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.532 [2024-07-15 09:35:15.758196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.532 [2024-07-15 09:35:15.758230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.532 [2024-07-15 09:35:15.774523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.532 [2024-07-15 09:35:15.774557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.791185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.791221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.807330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.807361] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.825356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.825389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.840804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.840837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.858401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.858449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.873190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.873226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.882448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.882481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.897092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.897127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.913864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.913918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.930820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.930850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.948152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.948184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.963921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.963944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.982814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.982848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.533 [2024-07-15 09:35:15.997946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.533 [2024-07-15 09:35:15.997975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.007805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.007851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.024117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.024151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.039183] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.039216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.055432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.055480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.073827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.073859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.088375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.088439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.104147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.104177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.121657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.121688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.137105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.137136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.147060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.147090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.162446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.162476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.179881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.179933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.201445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.201476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.215321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.215354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.232930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.232965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.247755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.247798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.791 [2024-07-15 09:35:16.257005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.791 [2024-07-15 09:35:16.257041] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.272560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.272593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.287383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.287416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.303295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.303329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.321249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.321282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.335987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.336023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.351402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.351447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.360594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.360628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.376406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.376439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.392754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.392791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.408417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.408452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.418292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.418325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.434588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.434622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.449107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.449141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.464621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.464660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.473942] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.473978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.490050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.490085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.049 [2024-07-15 09:35:16.506662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.049 [2024-07-15 09:35:16.506699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.307 [2024-07-15 09:35:16.524087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.307 [2024-07-15 09:35:16.524122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.307 [2024-07-15 09:35:16.539226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.307 [2024-07-15 09:35:16.539260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.307 [2024-07-15 09:35:16.548646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.307 [2024-07-15 09:35:16.548680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.307 [2024-07-15 09:35:16.563669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.307 [2024-07-15 09:35:16.563704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.307 [2024-07-15 09:35:16.578418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.307 [2024-07-15 09:35:16.578469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.307 [2024-07-15 09:35:16.593537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.307 [2024-07-15 09:35:16.593573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.307 [2024-07-15 09:35:16.608934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.307 [2024-07-15 09:35:16.608967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.307 [2024-07-15 09:35:16.618308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.307 [2024-07-15 09:35:16.618341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.307 [2024-07-15 09:35:16.634277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.307 [2024-07-15 09:35:16.634311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.307 [2024-07-15 09:35:16.651907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.307 [2024-07-15 09:35:16.651952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.307 [2024-07-15 09:35:16.662767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.307 [2024-07-15 09:35:16.662800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.307 00:09:22.307 Latency(us) 00:09:22.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.307 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 
50, depth: 128, IO size: 8192) 00:09:22.307 Nvme1n1 : 5.01 11393.61 89.01 0.00 0.00 11221.23 3470.43 23116.33 00:09:22.308 =================================================================================================================== 00:09:22.308 Total : 11393.61 89.01 0.00 0.00 11221.23 3470.43 23116.33 00:09:22.308 [2024-07-15 09:35:16.674772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.308 [2024-07-15 09:35:16.674805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.308 [2024-07-15 09:35:16.686777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.308 [2024-07-15 09:35:16.686816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.308 [2024-07-15 09:35:16.698801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.308 [2024-07-15 09:35:16.698842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.308 [2024-07-15 09:35:16.710802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.308 [2024-07-15 09:35:16.710842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.308 [2024-07-15 09:35:16.722803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.308 [2024-07-15 09:35:16.722845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.308 [2024-07-15 09:35:16.734797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.308 [2024-07-15 09:35:16.734844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.308 [2024-07-15 09:35:16.746802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.308 [2024-07-15 09:35:16.746843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.308 [2024-07-15 09:35:16.758818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.308 [2024-07-15 09:35:16.758858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.308 [2024-07-15 09:35:16.770806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.308 [2024-07-15 09:35:16.770846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.564 [2024-07-15 09:35:16.782819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.564 [2024-07-15 09:35:16.782860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.564 [2024-07-15 09:35:16.794816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.564 [2024-07-15 09:35:16.794855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.564 [2024-07-15 09:35:16.806798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.564 [2024-07-15 09:35:16.806832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.564 [2024-07-15 09:35:16.818817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.564 [2024-07-15 09:35:16.818858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.564 [2024-07-15 09:35:16.830820] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.564 [2024-07-15 09:35:16.830860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.565 [2024-07-15 09:35:16.842815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.565 [2024-07-15 09:35:16.842850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.565 [2024-07-15 09:35:16.854826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.565 [2024-07-15 09:35:16.854870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.565 [2024-07-15 09:35:16.866841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.565 [2024-07-15 09:35:16.866883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.565 [2024-07-15 09:35:16.878831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.565 [2024-07-15 09:35:16.878868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.565 [2024-07-15 09:35:16.890819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.565 [2024-07-15 09:35:16.890851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.565 [2024-07-15 09:35:16.902815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.565 [2024-07-15 09:35:16.902862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.565 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67984) - No such process 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67984 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 delay0 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.565 09:35:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:22.822 [2024-07-15 09:35:17.119474] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:29.369 Initializing NVMe 
Controllers 00:09:29.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:29.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:29.369 Initialization complete. Launching workers. 00:09:29.369 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 757 00:09:29.369 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1043, failed to submit 34 00:09:29.369 success 938, unsuccess 105, failed 0 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:29.369 rmmod nvme_tcp 00:09:29.369 rmmod nvme_fabrics 00:09:29.369 rmmod nvme_keyring 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67829 ']' 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67829 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67829 ']' 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67829 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67829 00:09:29.369 killing process with pid 67829 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67829' 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67829 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67829 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:29.369 00:09:29.369 real 0m25.022s 00:09:29.369 user 0m40.768s 00:09:29.369 sys 0m7.057s 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.369 09:35:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.369 ************************************ 00:09:29.369 END TEST nvmf_zcopy 00:09:29.369 ************************************ 00:09:29.369 09:35:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:29.369 09:35:23 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:29.369 09:35:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:29.369 09:35:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.369 09:35:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:29.369 ************************************ 00:09:29.369 START TEST nvmf_nmic 00:09:29.369 ************************************ 00:09:29.369 09:35:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:29.369 * Looking for test storage... 00:09:29.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:29.369 09:35:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.369 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:29.369 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.369 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.369 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.369 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.369 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.369 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.370 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.370 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.370 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.370 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.628 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:09:29.628 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:09:29.628 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.628 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.628 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:29.628 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.628 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.628 09:35:23 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.628 09:35:23 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.628 09:35:23 nvmf_tcp.nvmf_nmic 
-- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.628 09:35:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:29.629 Cannot find device "nvmf_tgt_br" 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:29.629 Cannot find device "nvmf_tgt_br2" 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:29.629 Cannot find device "nvmf_tgt_br" 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:29.629 Cannot find device "nvmf_tgt_br2" 00:09:29.629 09:35:23 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:29.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.629 09:35:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:29.629 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.629 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:29.629 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:29.629 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:29.629 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:29.629 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:29.629 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:29.629 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:29.888 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:09:29.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:09:29.888 00:09:29.888 --- 10.0.0.2 ping statistics --- 00:09:29.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.888 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:29.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:29.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:29.888 00:09:29.888 --- 10.0.0.3 ping statistics --- 00:09:29.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.888 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:29.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:29.888 00:09:29.888 --- 10.0.0.1 ping statistics --- 00:09:29.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.888 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68310 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68310 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68310 ']' 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
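For reference, the nvmf_veth_init sequence traced above reduces to a small fixed topology: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a bridge (nvmf_br) joining the host-side ends, and iptables rules admitting NVMe/TCP traffic on port 4420. The lines below are a condensed sketch, not the captured commands verbatim; they keep the interface names and addresses from the trace, assume root plus iproute2/iptables, and omit the second target interface (nvmf_tgt_if2, 10.0.0.3), which follows the same pattern.

  # create the target namespace and the veth pairs (the initiator side stays in the default namespace)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # 10.0.0.1 = initiator, 10.0.0.2 = target listener
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # bring the links up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # accept NVMe/TCP (port 4420) on the initiator interface, allow forwarding between bridge ports,
  # then verify reachability the same way the trace does
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2

With that in place the target, started inside the namespace, listens on 10.0.0.2:4420 while the kernel nvme-tcp initiator connects from 10.0.0.1 on the host side; the ping checks above verify that path before the target application comes up.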
00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.888 09:35:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:29.888 [2024-07-15 09:35:24.319948] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:29.888 [2024-07-15 09:35:24.320032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.147 [2024-07-15 09:35:24.454517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.147 [2024-07-15 09:35:24.578182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.147 [2024-07-15 09:35:24.578235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.147 [2024-07-15 09:35:24.578249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.147 [2024-07-15 09:35:24.578260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.147 [2024-07-15 09:35:24.578270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.147 [2024-07-15 09:35:24.578436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.147 [2024-07-15 09:35:24.578569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.147 [2024-07-15 09:35:24.578707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.147 [2024-07-15 09:35:24.579287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.406 [2024-07-15 09:35:24.636850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.972 [2024-07-15 09:35:25.390252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.972 Malloc0 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.972 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.231 [2024-07-15 09:35:25.457627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.231 test case1: single bdev can't be used in multiple subsystems 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.231 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.232 [2024-07-15 09:35:25.481458] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:31.232 [2024-07-15 09:35:25.481716] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:31.232 [2024-07-15 09:35:25.481746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.232 request: 00:09:31.232 { 00:09:31.232 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:31.232 "namespace": { 00:09:31.232 "bdev_name": "Malloc0", 00:09:31.232 "no_auto_visible": false 00:09:31.232 }, 00:09:31.232 "method": "nvmf_subsystem_add_ns", 00:09:31.232 "req_id": 1 00:09:31.232 } 00:09:31.232 Got JSON-RPC error response 00:09:31.232 response: 00:09:31.232 { 00:09:31.232 "code": -32602, 00:09:31.232 
"message": "Invalid parameters" 00:09:31.232 } 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:31.232 Adding namespace failed - expected result. 00:09:31.232 test case2: host connect to nvmf target in multiple paths 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:31.232 [2024-07-15 09:35:25.497569] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid=d2f81337-7559-423d-93ce-5836d202b6da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.232 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid=d2f81337-7559-423d-93ce-5836d202b6da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:31.489 09:35:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.489 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:31.489 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.489 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:31.489 09:35:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:33.407 09:35:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:33.407 09:35:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.407 09:35:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:33.407 09:35:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:33.407 09:35:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.407 09:35:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:33.407 09:35:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:33.407 [global] 00:09:33.407 thread=1 00:09:33.407 invalidate=1 00:09:33.407 rw=write 00:09:33.407 time_based=1 00:09:33.407 runtime=1 00:09:33.407 ioengine=libaio 00:09:33.407 direct=1 00:09:33.407 bs=4096 00:09:33.407 iodepth=1 00:09:33.407 norandommap=0 00:09:33.407 numjobs=1 00:09:33.407 00:09:33.407 verify_dump=1 00:09:33.407 verify_backlog=512 00:09:33.407 verify_state_save=0 00:09:33.407 do_verify=1 00:09:33.407 verify=crc32c-intel 00:09:33.407 [job0] 00:09:33.407 filename=/dev/nvme0n1 00:09:33.407 Could 
not set queue depth (nvme0n1) 00:09:33.665 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.665 fio-3.35 00:09:33.665 Starting 1 thread 00:09:34.596 00:09:34.596 job0: (groupid=0, jobs=1): err= 0: pid=68397: Mon Jul 15 09:35:29 2024 00:09:34.596 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:34.596 slat (nsec): min=12774, max=55034, avg=15555.98, stdev=3331.27 00:09:34.596 clat (usec): min=138, max=528, avg=174.82, stdev=17.66 00:09:34.596 lat (usec): min=152, max=543, avg=190.38, stdev=18.02 00:09:34.596 clat percentiles (usec): 00:09:34.596 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:09:34.596 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:09:34.596 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 202], 00:09:34.596 | 99.00th=[ 219], 99.50th=[ 231], 99.90th=[ 289], 99.95th=[ 355], 00:09:34.596 | 99.99th=[ 529] 00:09:34.596 write: IOPS=3108, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1001msec); 0 zone resets 00:09:34.596 slat (nsec): min=15222, max=96727, avg=22333.75, stdev=5048.93 00:09:34.596 clat (usec): min=86, max=240, avg=107.22, stdev=12.59 00:09:34.596 lat (usec): min=105, max=290, avg=129.55, stdev=14.33 00:09:34.596 clat percentiles (usec): 00:09:34.596 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 98], 00:09:34.596 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 108], 00:09:34.596 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 124], 95.00th=[ 133], 00:09:34.596 | 99.00th=[ 145], 99.50th=[ 153], 99.90th=[ 178], 99.95th=[ 204], 00:09:34.596 | 99.99th=[ 241] 00:09:34.596 bw ( KiB/s): min=12263, max=12263, per=98.61%, avg=12263.00, stdev= 0.00, samples=1 00:09:34.596 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:09:34.596 lat (usec) : 100=14.93%, 250=84.88%, 500=0.18%, 750=0.02% 00:09:34.596 cpu : usr=2.60%, sys=9.00%, ctx=6184, majf=0, minf=2 00:09:34.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.596 issued rwts: total=3072,3112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.596 00:09:34.596 Run status group 0 (all jobs): 00:09:34.596 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:34.596 WRITE: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=12.2MiB (12.7MB), run=1001-1001msec 00:09:34.596 00:09:34.596 Disk stats (read/write): 00:09:34.596 nvme0n1: ios=2610/3070, merge=0/0, ticks=480/370, in_queue=850, util=91.28% 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.854 09:35:29 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.854 rmmod nvme_tcp 00:09:34.854 rmmod nvme_fabrics 00:09:34.854 rmmod nvme_keyring 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68310 ']' 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68310 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68310 ']' 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68310 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:34.854 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68310 00:09:34.854 killing process with pid 68310 00:09:34.855 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:34.855 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:34.855 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68310' 00:09:34.855 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68310 00:09:34.855 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68310 00:09:35.111 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:35.111 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:35.111 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:35.111 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.111 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:35.111 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.111 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.111 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.112 09:35:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:35.112 00:09:35.112 real 0m5.822s 00:09:35.112 user 0m18.445s 00:09:35.112 sys 0m2.247s 00:09:35.112 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.112 09:35:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.112 
************************************ 00:09:35.112 END TEST nvmf_nmic 00:09:35.112 ************************************ 00:09:35.370 09:35:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:35.370 09:35:29 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:35.370 09:35:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:35.370 09:35:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.370 09:35:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:35.370 ************************************ 00:09:35.370 START TEST nvmf_fio_target 00:09:35.370 ************************************ 00:09:35.370 09:35:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:35.370 * Looking for test storage... 00:09:35.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:35.371 Cannot find device "nvmf_tgt_br" 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.371 Cannot find device "nvmf_tgt_br2" 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:09:35.371 Cannot find device "nvmf_tgt_br" 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:35.371 Cannot find device "nvmf_tgt_br2" 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:35.371 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:09:35.630 09:35:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:35.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:09:35.630 00:09:35.630 --- 10.0.0.2 ping statistics --- 00:09:35.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.630 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:35.630 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:35.630 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:09:35.630 00:09:35.630 --- 10.0.0.3 ping statistics --- 00:09:35.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.630 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:35.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:35.630 00:09:35.630 --- 10.0.0.1 ping statistics --- 00:09:35.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.630 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68578 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68578 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68578 ']' 00:09:35.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.630 09:35:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.889 [2024-07-15 09:35:30.115178] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:35.889 [2024-07-15 09:35:30.115473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.889 [2024-07-15 09:35:30.252134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.152 [2024-07-15 09:35:30.400249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.152 [2024-07-15 09:35:30.400304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.153 [2024-07-15 09:35:30.400319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.153 [2024-07-15 09:35:30.400329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.153 [2024-07-15 09:35:30.400338] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
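Once the target process is up and its RPC socket /var/tmp/spdk.sock is listening, fio.sh drives it entirely over JSON-RPC; the rpc.py calls traced below reduce to roughly the following sequence. This is a condensed sketch (the script issues one call per bdev and adds the listener before the raid namespaces, rather than using a loop); the rpc.py path, NQN, serial, and listener address are taken from the trace, and the host NQN/ID come from nvme gen-hostnqn and change every run.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # TCP transport, with the options captured in the trace
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # backing bdevs: 64 MiB malloc disks with 512-byte blocks (repeated for Malloc0..Malloc6),
  # plus RAID-0 and concat arrays built on top of them
  $rpc bdev_malloc_create 64 512
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  # one subsystem with four namespaces, listening on the veth target address
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # kernel initiator attaches; the four namespaces appear as /dev/nvme0n1 .. /dev/nvme0n4
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Those /dev/nvme0n1-n4 block devices are exactly what the four fio-wrapper jobs further down write to and verify.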
00:09:36.153 [2024-07-15 09:35:30.400468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.153 [2024-07-15 09:35:30.400591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.153 [2024-07-15 09:35:30.401318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.153 [2024-07-15 09:35:30.401345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.153 [2024-07-15 09:35:30.461103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:36.719 09:35:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.719 09:35:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:09:36.719 09:35:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:36.719 09:35:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:36.719 09:35:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.719 09:35:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.719 09:35:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:36.978 [2024-07-15 09:35:31.315703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.978 09:35:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.236 09:35:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:37.236 09:35:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.494 09:35:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:37.494 09:35:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.765 09:35:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:37.765 09:35:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.046 09:35:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:38.046 09:35:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:38.304 09:35:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.561 09:35:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:38.561 09:35:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.819 09:35:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:38.819 09:35:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.077 09:35:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:39.077 09:35:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:39.334 09:35:33 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:39.591 09:35:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:39.591 09:35:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.848 09:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:39.848 09:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:40.106 09:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.363 [2024-07-15 09:35:34.702120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.363 09:35:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:40.620 09:35:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:40.877 09:35:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid=d2f81337-7559-423d-93ce-5836d202b6da -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:41.133 09:35:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:41.133 09:35:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:41.133 09:35:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.133 09:35:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:41.133 09:35:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:41.133 09:35:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:43.029 09:35:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:43.029 09:35:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:43.029 09:35:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.029 09:35:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:43.029 09:35:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.029 09:35:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:43.029 09:35:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:43.029 [global] 00:09:43.029 thread=1 00:09:43.029 invalidate=1 00:09:43.029 rw=write 00:09:43.029 time_based=1 00:09:43.029 runtime=1 00:09:43.029 ioengine=libaio 00:09:43.029 direct=1 00:09:43.029 bs=4096 00:09:43.029 iodepth=1 00:09:43.029 norandommap=0 00:09:43.029 numjobs=1 00:09:43.029 00:09:43.029 verify_dump=1 00:09:43.029 verify_backlog=512 00:09:43.029 verify_state_save=0 00:09:43.029 do_verify=1 00:09:43.029 
verify=crc32c-intel 00:09:43.029 [job0] 00:09:43.029 filename=/dev/nvme0n1 00:09:43.029 [job1] 00:09:43.029 filename=/dev/nvme0n2 00:09:43.029 [job2] 00:09:43.029 filename=/dev/nvme0n3 00:09:43.029 [job3] 00:09:43.029 filename=/dev/nvme0n4 00:09:43.285 Could not set queue depth (nvme0n1) 00:09:43.285 Could not set queue depth (nvme0n2) 00:09:43.285 Could not set queue depth (nvme0n3) 00:09:43.286 Could not set queue depth (nvme0n4) 00:09:43.286 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.286 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.286 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.286 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.286 fio-3.35 00:09:43.286 Starting 4 threads 00:09:44.661 00:09:44.661 job0: (groupid=0, jobs=1): err= 0: pid=68768: Mon Jul 15 09:35:38 2024 00:09:44.661 read: IOPS=1910, BW=7640KiB/s (7824kB/s)(7648KiB/1001msec) 00:09:44.661 slat (nsec): min=12181, max=53394, avg=20206.01, stdev=7705.88 00:09:44.661 clat (usec): min=155, max=2401, avg=294.02, stdev=88.40 00:09:44.661 lat (usec): min=183, max=2429, avg=314.23, stdev=90.21 00:09:44.661 clat percentiles (usec): 00:09:44.661 | 1.00th=[ 210], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 247], 00:09:44.661 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:09:44.661 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 437], 95.00th=[ 461], 00:09:44.661 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 775], 99.95th=[ 2409], 00:09:44.661 | 99.99th=[ 2409] 00:09:44.661 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:44.661 slat (usec): min=17, max=155, avg=27.84, stdev=10.10 00:09:44.661 clat (usec): min=92, max=498, avg=162.42, stdev=37.46 00:09:44.661 lat (usec): min=111, max=524, avg=190.26, stdev=40.49 00:09:44.661 clat percentiles (usec): 00:09:44.661 | 1.00th=[ 98], 5.00th=[ 103], 10.00th=[ 109], 20.00th=[ 122], 00:09:44.661 | 30.00th=[ 141], 40.00th=[ 161], 50.00th=[ 172], 60.00th=[ 178], 00:09:44.661 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 210], 00:09:44.661 | 99.00th=[ 231], 99.50th=[ 281], 99.90th=[ 355], 99.95th=[ 486], 00:09:44.661 | 99.99th=[ 498] 00:09:44.661 bw ( KiB/s): min= 8192, max= 8192, per=20.11%, avg=8192.00, stdev= 0.00, samples=1 00:09:44.661 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:44.661 lat (usec) : 100=1.31%, 250=61.19%, 500=37.15%, 750=0.28%, 1000=0.05% 00:09:44.661 lat (msec) : 4=0.03% 00:09:44.661 cpu : usr=1.90%, sys=7.70%, ctx=3961, majf=0, minf=11 00:09:44.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.661 issued rwts: total=1912,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.661 job1: (groupid=0, jobs=1): err= 0: pid=68769: Mon Jul 15 09:35:38 2024 00:09:44.661 read: IOPS=1837, BW=7349KiB/s (7525kB/s)(7356KiB/1001msec) 00:09:44.661 slat (usec): min=14, max=309, avg=21.05, stdev= 7.49 00:09:44.661 clat (usec): min=149, max=1032, avg=279.61, stdev=57.73 00:09:44.661 lat (usec): min=165, max=1050, avg=300.65, stdev=58.21 00:09:44.661 clat percentiles (usec): 
00:09:44.661 | 1.00th=[ 190], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 247], 00:09:44.661 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:09:44.661 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 367], 95.00th=[ 383], 00:09:44.661 | 99.00th=[ 469], 99.50th=[ 478], 99.90th=[ 766], 99.95th=[ 1029], 00:09:44.661 | 99.99th=[ 1029] 00:09:44.661 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:44.661 slat (usec): min=19, max=158, avg=28.45, stdev= 5.38 00:09:44.661 clat (usec): min=93, max=1019, avg=185.34, stdev=69.85 00:09:44.661 lat (usec): min=122, max=1060, avg=213.79, stdev=71.03 00:09:44.661 clat percentiles (usec): 00:09:44.661 | 1.00th=[ 102], 5.00th=[ 110], 10.00th=[ 114], 20.00th=[ 122], 00:09:44.661 | 30.00th=[ 139], 40.00th=[ 163], 50.00th=[ 176], 60.00th=[ 188], 00:09:44.661 | 70.00th=[ 198], 80.00th=[ 217], 90.00th=[ 297], 95.00th=[ 310], 00:09:44.661 | 99.00th=[ 330], 99.50th=[ 375], 99.90th=[ 545], 99.95th=[ 881], 00:09:44.661 | 99.99th=[ 1020] 00:09:44.661 bw ( KiB/s): min= 8192, max= 8192, per=20.11%, avg=8192.00, stdev= 0.00, samples=1 00:09:44.661 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:44.661 lat (usec) : 100=0.33%, 250=54.98%, 500=44.46%, 750=0.13%, 1000=0.05% 00:09:44.661 lat (msec) : 2=0.05% 00:09:44.661 cpu : usr=2.60%, sys=7.00%, ctx=3888, majf=0, minf=11 00:09:44.661 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.661 issued rwts: total=1839,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.661 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.661 job2: (groupid=0, jobs=1): err= 0: pid=68770: Mon Jul 15 09:35:38 2024 00:09:44.661 read: IOPS=2644, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:09:44.661 slat (nsec): min=12503, max=45615, avg=15423.57, stdev=2711.12 00:09:44.661 clat (usec): min=151, max=1524, avg=179.10, stdev=28.79 00:09:44.661 lat (usec): min=165, max=1539, avg=194.52, stdev=29.04 00:09:44.661 clat percentiles (usec): 00:09:44.661 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:09:44.661 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:09:44.661 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 200], 00:09:44.662 | 99.00th=[ 210], 99.50th=[ 215], 99.90th=[ 233], 99.95th=[ 241], 00:09:44.662 | 99.99th=[ 1532] 00:09:44.662 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:44.662 slat (usec): min=14, max=127, avg=22.30, stdev= 6.23 00:09:44.662 clat (usec): min=102, max=1962, avg=132.16, stdev=36.49 00:09:44.662 lat (usec): min=120, max=1985, avg=154.45, stdev=37.49 00:09:44.662 clat percentiles (usec): 00:09:44.662 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 122], 00:09:44.662 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 135], 00:09:44.662 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 151], 00:09:44.662 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 297], 99.95th=[ 603], 00:09:44.662 | 99.99th=[ 1958] 00:09:44.662 bw ( KiB/s): min=12288, max=12288, per=30.17%, avg=12288.00, stdev= 0.00, samples=1 00:09:44.662 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:44.662 lat (usec) : 250=99.91%, 500=0.03%, 750=0.02% 00:09:44.662 lat (msec) : 2=0.03% 00:09:44.662 cpu : usr=2.50%, sys=8.30%, ctx=5728, majf=0, minf=7 
00:09:44.662 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.662 issued rwts: total=2647,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.662 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.662 job3: (groupid=0, jobs=1): err= 0: pid=68771: Mon Jul 15 09:35:38 2024 00:09:44.662 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:44.662 slat (nsec): min=13056, max=81969, avg=17948.24, stdev=4748.29 00:09:44.662 clat (usec): min=150, max=528, avg=181.78, stdev=13.80 00:09:44.662 lat (usec): min=167, max=544, avg=199.73, stdev=14.99 00:09:44.662 clat percentiles (usec): 00:09:44.662 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:09:44.662 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 184], 00:09:44.662 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 196], 95.00th=[ 202], 00:09:44.662 | 99.00th=[ 212], 99.50th=[ 217], 99.90th=[ 297], 99.95th=[ 297], 00:09:44.662 | 99.99th=[ 529] 00:09:44.662 write: IOPS=3021, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec); 0 zone resets 00:09:44.662 slat (nsec): min=15827, max=90669, avg=24152.37, stdev=4579.48 00:09:44.662 clat (usec): min=103, max=591, avg=133.66, stdev=14.59 00:09:44.662 lat (usec): min=123, max=614, avg=157.81, stdev=15.29 00:09:44.662 clat percentiles (usec): 00:09:44.662 | 1.00th=[ 111], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 126], 00:09:44.662 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 137], 00:09:44.662 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 151], 00:09:44.662 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 255], 99.95th=[ 424], 00:09:44.662 | 99.99th=[ 594] 00:09:44.662 bw ( KiB/s): min=12288, max=12288, per=30.17%, avg=12288.00, stdev= 0.00, samples=1 00:09:44.662 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:44.662 lat (usec) : 250=99.87%, 500=0.09%, 750=0.04% 00:09:44.662 cpu : usr=2.00%, sys=9.90%, ctx=5585, majf=0, minf=6 00:09:44.662 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.662 issued rwts: total=2560,3025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.662 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.662 00:09:44.662 Run status group 0 (all jobs): 00:09:44.662 READ: bw=35.0MiB/s (36.7MB/s), 7349KiB/s-10.3MiB/s (7525kB/s-10.8MB/s), io=35.0MiB (36.7MB), run=1001-1001msec 00:09:44.662 WRITE: bw=39.8MiB/s (41.7MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.8MiB (41.8MB), run=1001-1001msec 00:09:44.662 00:09:44.662 Disk stats (read/write): 00:09:44.662 nvme0n1: ios=1586/1944, merge=0/0, ticks=477/327, in_queue=804, util=87.47% 00:09:44.662 nvme0n2: ios=1556/1756, merge=0/0, ticks=460/345, in_queue=805, util=87.70% 00:09:44.662 nvme0n3: ios=2315/2560, merge=0/0, ticks=432/363, in_queue=795, util=89.21% 00:09:44.662 nvme0n4: ios=2204/2560, merge=0/0, ticks=405/372, in_queue=777, util=89.77% 00:09:44.662 09:35:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:44.662 [global] 00:09:44.662 thread=1 00:09:44.662 invalidate=1 00:09:44.662 rw=randwrite 00:09:44.662 time_based=1 00:09:44.662 
runtime=1 00:09:44.662 ioengine=libaio 00:09:44.662 direct=1 00:09:44.662 bs=4096 00:09:44.662 iodepth=1 00:09:44.662 norandommap=0 00:09:44.662 numjobs=1 00:09:44.662 00:09:44.662 verify_dump=1 00:09:44.662 verify_backlog=512 00:09:44.662 verify_state_save=0 00:09:44.662 do_verify=1 00:09:44.662 verify=crc32c-intel 00:09:44.662 [job0] 00:09:44.662 filename=/dev/nvme0n1 00:09:44.662 [job1] 00:09:44.662 filename=/dev/nvme0n2 00:09:44.662 [job2] 00:09:44.662 filename=/dev/nvme0n3 00:09:44.662 [job3] 00:09:44.662 filename=/dev/nvme0n4 00:09:44.662 Could not set queue depth (nvme0n1) 00:09:44.662 Could not set queue depth (nvme0n2) 00:09:44.662 Could not set queue depth (nvme0n3) 00:09:44.662 Could not set queue depth (nvme0n4) 00:09:44.662 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.662 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.662 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.662 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.662 fio-3.35 00:09:44.662 Starting 4 threads 00:09:46.036 00:09:46.036 job0: (groupid=0, jobs=1): err= 0: pid=68824: Mon Jul 15 09:35:40 2024 00:09:46.036 read: IOPS=2835, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:09:46.036 slat (nsec): min=11264, max=97385, avg=14493.78, stdev=3706.40 00:09:46.036 clat (usec): min=137, max=3001, avg=172.15, stdev=78.61 00:09:46.036 lat (usec): min=149, max=3027, avg=186.64, stdev=79.03 00:09:46.036 clat percentiles (usec): 00:09:46.036 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:09:46.036 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:09:46.036 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:09:46.036 | 99.00th=[ 221], 99.50th=[ 310], 99.90th=[ 1549], 99.95th=[ 2671], 00:09:46.036 | 99.99th=[ 2999] 00:09:46.036 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:46.036 slat (nsec): min=13866, max=90133, avg=22320.14, stdev=5737.14 00:09:46.036 clat (usec): min=98, max=2100, avg=127.23, stdev=51.18 00:09:46.036 lat (usec): min=117, max=2124, avg=149.55, stdev=51.54 00:09:46.036 clat percentiles (usec): 00:09:46.036 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 118], 00:09:46.036 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 127], 00:09:46.037 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145], 00:09:46.037 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 635], 99.95th=[ 1844], 00:09:46.037 | 99.99th=[ 2114] 00:09:46.037 bw ( KiB/s): min=12288, max=12288, per=30.02%, avg=12288.00, stdev= 0.00, samples=1 00:09:46.037 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:46.037 lat (usec) : 100=0.03%, 250=99.56%, 500=0.25%, 750=0.03%, 1000=0.03% 00:09:46.037 lat (msec) : 2=0.03%, 4=0.05% 00:09:46.037 cpu : usr=2.40%, sys=8.70%, ctx=5916, majf=0, minf=10 00:09:46.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.037 issued rwts: total=2838,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.037 job1: (groupid=0, jobs=1): err= 0: pid=68825: 
Mon Jul 15 09:35:40 2024 00:09:46.037 read: IOPS=1935, BW=7740KiB/s (7926kB/s)(7740KiB/1000msec) 00:09:46.037 slat (usec): min=8, max=800, avg=11.52, stdev=18.09 00:09:46.037 clat (usec): min=66, max=471, avg=269.70, stdev=33.88 00:09:46.037 lat (usec): min=187, max=866, avg=281.23, stdev=37.19 00:09:46.037 clat percentiles (usec): 00:09:46.037 | 1.00th=[ 227], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:09:46.037 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:09:46.037 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 338], 95.00th=[ 351], 00:09:46.037 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 461], 99.95th=[ 474], 00:09:46.037 | 99.99th=[ 474] 00:09:46.037 write: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec); 0 zone resets 00:09:46.037 slat (nsec): min=13993, max=85917, avg=20134.77, stdev=4377.30 00:09:46.037 clat (usec): min=112, max=7851, avg=199.29, stdev=267.28 00:09:46.037 lat (usec): min=137, max=7870, avg=219.43, stdev=267.91 00:09:46.037 clat percentiles (usec): 00:09:46.037 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 165], 00:09:46.037 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 194], 00:09:46.037 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 239], 00:09:46.037 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 5473], 99.95th=[ 6259], 00:09:46.037 | 99.99th=[ 7832] 00:09:46.037 bw ( KiB/s): min= 8192, max= 8192, per=20.01%, avg=8192.00, stdev= 0.00, samples=1 00:09:46.037 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:46.037 lat (usec) : 100=0.03%, 250=61.76%, 500=38.04%, 750=0.05% 00:09:46.037 lat (msec) : 4=0.03%, 10=0.10% 00:09:46.037 cpu : usr=1.60%, sys=5.00%, ctx=3985, majf=0, minf=15 00:09:46.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.037 issued rwts: total=1935,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.037 job2: (groupid=0, jobs=1): err= 0: pid=68826: Mon Jul 15 09:35:40 2024 00:09:46.037 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:46.037 slat (nsec): min=11215, max=39369, avg=14307.76, stdev=2063.00 00:09:46.037 clat (usec): min=159, max=1010, avg=266.82, stdev=37.01 00:09:46.037 lat (usec): min=172, max=1028, avg=281.13, stdev=37.35 00:09:46.037 clat percentiles (usec): 00:09:46.037 | 1.00th=[ 208], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:09:46.037 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:09:46.037 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 326], 95.00th=[ 338], 00:09:46.037 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 396], 99.95th=[ 437], 00:09:46.037 | 99.99th=[ 1012] 00:09:46.037 write: IOPS=2048, BW=8196KiB/s (8393kB/s)(8204KiB/1001msec); 0 zone resets 00:09:46.037 slat (nsec): min=11199, max=88383, avg=18393.24, stdev=5998.65 00:09:46.037 clat (usec): min=101, max=464, avg=185.09, stdev=31.02 00:09:46.037 lat (usec): min=126, max=486, avg=203.48, stdev=31.13 00:09:46.037 clat percentiles (usec): 00:09:46.037 | 1.00th=[ 113], 5.00th=[ 129], 10.00th=[ 143], 20.00th=[ 157], 00:09:46.037 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 196], 00:09:46.037 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 233], 00:09:46.037 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 269], 99.95th=[ 318], 00:09:46.037 | 
99.99th=[ 465] 00:09:46.037 bw ( KiB/s): min= 8384, max= 8384, per=20.48%, avg=8384.00, stdev= 0.00, samples=1 00:09:46.037 iops : min= 2096, max= 2096, avg=2096.00, stdev= 0.00, samples=1 00:09:46.037 lat (usec) : 250=66.16%, 500=33.81% 00:09:46.037 lat (msec) : 2=0.02% 00:09:46.037 cpu : usr=1.50%, sys=5.70%, ctx=4107, majf=0, minf=9 00:09:46.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.037 issued rwts: total=2048,2051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.037 job3: (groupid=0, jobs=1): err= 0: pid=68827: Mon Jul 15 09:35:40 2024 00:09:46.037 read: IOPS=2828, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:09:46.037 slat (nsec): min=11460, max=44482, avg=13912.69, stdev=2438.02 00:09:46.037 clat (usec): min=147, max=413, avg=171.78, stdev=12.29 00:09:46.037 lat (usec): min=160, max=431, avg=185.69, stdev=13.06 00:09:46.037 clat percentiles (usec): 00:09:46.037 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 161], 00:09:46.037 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:09:46.037 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:09:46.037 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 225], 99.95th=[ 231], 00:09:46.037 | 99.99th=[ 416] 00:09:46.037 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:46.037 slat (nsec): min=14069, max=86524, avg=21218.87, stdev=4931.18 00:09:46.037 clat (usec): min=103, max=2295, avg=129.89, stdev=41.28 00:09:46.037 lat (usec): min=122, max=2322, avg=151.10, stdev=41.88 00:09:46.037 clat percentiles (usec): 00:09:46.037 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:09:46.037 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 131], 00:09:46.037 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 149], 00:09:46.037 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 223], 99.95th=[ 400], 00:09:46.037 | 99.99th=[ 2311] 00:09:46.037 bw ( KiB/s): min=12288, max=12288, per=30.02%, avg=12288.00, stdev= 0.00, samples=1 00:09:46.037 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:46.037 lat (usec) : 250=99.93%, 500=0.05% 00:09:46.037 lat (msec) : 4=0.02% 00:09:46.037 cpu : usr=2.00%, sys=8.60%, ctx=5903, majf=0, minf=11 00:09:46.037 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.037 issued rwts: total=2831,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.037 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.037 00:09:46.037 Run status group 0 (all jobs): 00:09:46.037 READ: bw=37.7MiB/s (39.5MB/s), 7740KiB/s-11.1MiB/s (7926kB/s-11.6MB/s), io=37.7MiB (39.5MB), run=1000-1001msec 00:09:46.037 WRITE: bw=40.0MiB/s (41.9MB/s), 8192KiB/s-12.0MiB/s (8389kB/s-12.6MB/s), io=40.0MiB (42.0MB), run=1000-1001msec 00:09:46.037 00:09:46.037 Disk stats (read/write): 00:09:46.037 nvme0n1: ios=2569/2560, merge=0/0, ticks=455/349, in_queue=804, util=87.17% 00:09:46.037 nvme0n2: ios=1577/1865, merge=0/0, ticks=422/366, in_queue=788, util=86.65% 00:09:46.037 nvme0n3: ios=1548/2048, merge=0/0, ticks=416/352, in_queue=768, util=88.83% 00:09:46.037 nvme0n4: 
ios=2468/2560, merge=0/0, ticks=431/359, in_queue=790, util=89.76% 00:09:46.037 09:35:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:46.037 [global] 00:09:46.037 thread=1 00:09:46.037 invalidate=1 00:09:46.037 rw=write 00:09:46.037 time_based=1 00:09:46.037 runtime=1 00:09:46.037 ioengine=libaio 00:09:46.037 direct=1 00:09:46.037 bs=4096 00:09:46.037 iodepth=128 00:09:46.037 norandommap=0 00:09:46.037 numjobs=1 00:09:46.037 00:09:46.037 verify_dump=1 00:09:46.037 verify_backlog=512 00:09:46.037 verify_state_save=0 00:09:46.037 do_verify=1 00:09:46.037 verify=crc32c-intel 00:09:46.037 [job0] 00:09:46.037 filename=/dev/nvme0n1 00:09:46.037 [job1] 00:09:46.037 filename=/dev/nvme0n2 00:09:46.037 [job2] 00:09:46.037 filename=/dev/nvme0n3 00:09:46.037 [job3] 00:09:46.037 filename=/dev/nvme0n4 00:09:46.037 Could not set queue depth (nvme0n1) 00:09:46.037 Could not set queue depth (nvme0n2) 00:09:46.037 Could not set queue depth (nvme0n3) 00:09:46.037 Could not set queue depth (nvme0n4) 00:09:46.037 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.037 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.037 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.037 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.037 fio-3.35 00:09:46.037 Starting 4 threads 00:09:47.412 00:09:47.412 job0: (groupid=0, jobs=1): err= 0: pid=68885: Mon Jul 15 09:35:41 2024 00:09:47.412 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(21.8MiB/1002msec) 00:09:47.412 slat (usec): min=7, max=3640, avg=86.11, stdev=334.16 00:09:47.412 clat (usec): min=484, max=15609, avg=11408.69, stdev=1238.32 00:09:47.412 lat (usec): min=1948, max=15641, avg=11494.80, stdev=1263.75 00:09:47.412 clat percentiles (usec): 00:09:47.412 | 1.00th=[ 5997], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[10814], 00:09:47.412 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:09:47.412 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12780], 95.00th=[13304], 00:09:47.412 | 99.00th=[13960], 99.50th=[14222], 99.90th=[14877], 99.95th=[15270], 00:09:47.412 | 99.99th=[15664] 00:09:47.412 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:47.412 slat (usec): min=9, max=3189, avg=84.31, stdev=356.51 00:09:47.412 clat (usec): min=8248, max=14812, avg=11197.04, stdev=892.43 00:09:47.412 lat (usec): min=8287, max=14863, avg=11281.35, stdev=951.96 00:09:47.412 clat percentiles (usec): 00:09:47.412 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10290], 20.00th=[10552], 00:09:47.412 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:09:47.412 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12256], 95.00th=[13042], 00:09:47.412 | 99.00th=[14091], 99.50th=[14353], 99.90th=[14615], 99.95th=[14746], 00:09:47.412 | 99.99th=[14877] 00:09:47.412 bw ( KiB/s): min=20912, max=24144, per=33.26%, avg=22528.00, stdev=2285.37, samples=2 00:09:47.412 iops : min= 5228, max= 6036, avg=5632.00, stdev=571.34, samples=2 00:09:47.412 lat (usec) : 500=0.01% 00:09:47.412 lat (msec) : 2=0.03%, 4=0.21%, 10=5.14%, 20=94.62% 00:09:47.412 cpu : usr=4.50%, sys=16.48%, ctx=501, majf=0, minf=9 00:09:47.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:47.412 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.412 issued rwts: total=5582,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.412 job1: (groupid=0, jobs=1): err= 0: pid=68886: Mon Jul 15 09:35:41 2024 00:09:47.412 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:09:47.412 slat (usec): min=6, max=4041, avg=85.45, stdev=333.04 00:09:47.412 clat (usec): min=8158, max=15775, avg=11298.46, stdev=993.17 00:09:47.412 lat (usec): min=8177, max=15806, avg=11383.91, stdev=1029.33 00:09:47.412 clat percentiles (usec): 00:09:47.412 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[10552], 20.00th=[10683], 00:09:47.412 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:09:47.412 | 70.00th=[11469], 80.00th=[11994], 90.00th=[12649], 95.00th=[13304], 00:09:47.412 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14877], 99.95th=[15533], 00:09:47.412 | 99.99th=[15795] 00:09:47.412 write: IOPS=5815, BW=22.7MiB/s (23.8MB/s)(22.7MiB/1001msec); 0 zone resets 00:09:47.412 slat (usec): min=9, max=2909, avg=81.55, stdev=349.09 00:09:47.412 clat (usec): min=296, max=14586, avg=10810.26, stdev=1146.13 00:09:47.412 lat (usec): min=2710, max=14605, avg=10891.80, stdev=1188.51 00:09:47.412 clat percentiles (usec): 00:09:47.412 | 1.00th=[ 5932], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10290], 00:09:47.413 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:09:47.413 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11731], 95.00th=[12780], 00:09:47.413 | 99.00th=[13829], 99.50th=[14091], 99.90th=[14353], 99.95th=[14615], 00:09:47.413 | 99.99th=[14615] 00:09:47.413 bw ( KiB/s): min=24617, max=24617, per=36.35%, avg=24617.00, stdev= 0.00, samples=1 00:09:47.413 iops : min= 6154, max= 6154, avg=6154.00, stdev= 0.00, samples=1 00:09:47.413 lat (usec) : 500=0.01% 00:09:47.413 lat (msec) : 4=0.37%, 10=5.79%, 20=93.84% 00:09:47.413 cpu : usr=5.20%, sys=15.20%, ctx=541, majf=0, minf=9 00:09:47.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:47.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.413 issued rwts: total=5632,5821,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.413 job2: (groupid=0, jobs=1): err= 0: pid=68888: Mon Jul 15 09:35:41 2024 00:09:47.413 read: IOPS=2933, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1003msec) 00:09:47.413 slat (usec): min=6, max=6534, avg=168.17, stdev=755.94 00:09:47.413 clat (usec): min=1342, max=39304, avg=21686.47, stdev=4689.82 00:09:47.413 lat (usec): min=3658, max=39351, avg=21854.64, stdev=4719.14 00:09:47.413 clat percentiles (usec): 00:09:47.413 | 1.00th=[ 8979], 5.00th=[14877], 10.00th=[16712], 20.00th=[17171], 00:09:47.413 | 30.00th=[17695], 40.00th=[20841], 50.00th=[23725], 60.00th=[23987], 00:09:47.413 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25560], 95.00th=[28181], 00:09:47.413 | 99.00th=[35390], 99.50th=[36963], 99.90th=[39060], 99.95th=[39060], 00:09:47.413 | 99.99th=[39060] 00:09:47.413 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:47.413 slat (usec): min=11, max=15301, avg=156.07, stdev=954.34 00:09:47.413 clat (usec): min=10697, max=39275, avg=20284.09, stdev=5459.80 00:09:47.413 lat (usec): 
min=10738, max=39309, avg=20440.16, stdev=5554.17 00:09:47.413 clat percentiles (usec): 00:09:47.413 | 1.00th=[11731], 5.00th=[12387], 10.00th=[13042], 20.00th=[13435], 00:09:47.413 | 30.00th=[14877], 40.00th=[21365], 50.00th=[22676], 60.00th=[23200], 00:09:47.413 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25035], 95.00th=[28181], 00:09:47.413 | 99.00th=[34866], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:09:47.413 | 99.99th=[39060] 00:09:47.413 bw ( KiB/s): min=12136, max=12464, per=18.16%, avg=12300.00, stdev=231.93, samples=2 00:09:47.413 iops : min= 3034, max= 3116, avg=3075.00, stdev=57.98, samples=2 00:09:47.413 lat (msec) : 2=0.02%, 4=0.05%, 10=1.00%, 20=36.03%, 50=62.90% 00:09:47.413 cpu : usr=2.20%, sys=10.48%, ctx=229, majf=0, minf=15 00:09:47.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:47.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.413 issued rwts: total=2942,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.413 job3: (groupid=0, jobs=1): err= 0: pid=68889: Mon Jul 15 09:35:41 2024 00:09:47.413 read: IOPS=2044, BW=8178KiB/s (8375kB/s)(8252KiB/1009msec) 00:09:47.413 slat (usec): min=6, max=10260, avg=195.90, stdev=903.99 00:09:47.413 clat (usec): min=7501, max=53364, avg=24732.83, stdev=4391.58 00:09:47.413 lat (usec): min=8707, max=59753, avg=24928.72, stdev=4418.38 00:09:47.413 clat percentiles (usec): 00:09:47.413 | 1.00th=[14877], 5.00th=[19006], 10.00th=[20841], 20.00th=[23462], 00:09:47.413 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:09:47.413 | 70.00th=[24773], 80.00th=[25297], 90.00th=[29754], 95.00th=[31589], 00:09:47.413 | 99.00th=[43779], 99.50th=[50070], 99.90th=[53216], 99.95th=[53216], 00:09:47.413 | 99.99th=[53216] 00:09:47.413 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:09:47.413 slat (usec): min=12, max=10485, avg=224.74, stdev=1083.72 00:09:47.413 clat (usec): min=10587, max=77562, avg=29903.87, stdev=14920.22 00:09:47.413 lat (usec): min=10630, max=77602, avg=30128.60, stdev=15027.23 00:09:47.413 clat percentiles (usec): 00:09:47.413 | 1.00th=[14353], 5.00th=[17695], 10.00th=[20317], 20.00th=[22676], 00:09:47.413 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23725], 60.00th=[24249], 00:09:47.413 | 70.00th=[25297], 80.00th=[32637], 90.00th=[55313], 95.00th=[70779], 00:09:47.413 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:09:47.413 | 99.99th=[77071] 00:09:47.413 bw ( KiB/s): min= 7304, max=12280, per=14.46%, avg=9792.00, stdev=3518.56, samples=2 00:09:47.413 iops : min= 1826, max= 3070, avg=2448.00, stdev=879.64, samples=2 00:09:47.413 lat (msec) : 10=0.19%, 20=7.90%, 50=85.07%, 100=6.84% 00:09:47.413 cpu : usr=1.79%, sys=8.23%, ctx=241, majf=0, minf=10 00:09:47.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:09:47.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.413 issued rwts: total=2063,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.413 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.413 00:09:47.413 Run status group 0 (all jobs): 00:09:47.413 READ: bw=62.8MiB/s (65.8MB/s), 8178KiB/s-22.0MiB/s (8375kB/s-23.0MB/s), io=63.4MiB (66.4MB), 
run=1001-1009msec 00:09:47.413 WRITE: bw=66.1MiB/s (69.4MB/s), 9.91MiB/s-22.7MiB/s (10.4MB/s-23.8MB/s), io=66.7MiB (70.0MB), run=1001-1009msec 00:09:47.413 00:09:47.413 Disk stats (read/write): 00:09:47.413 nvme0n1: ios=4658/4959, merge=0/0, ticks=16499/15113, in_queue=31612, util=87.06% 00:09:47.413 nvme0n2: ios=4628/5074, merge=0/0, ticks=16684/15517, in_queue=32201, util=87.05% 00:09:47.413 nvme0n3: ios=2193/2560, merge=0/0, ticks=25231/24714, in_queue=49945, util=89.12% 00:09:47.413 nvme0n4: ios=2048/2183, merge=0/0, ticks=25204/25086, in_queue=50290, util=89.68% 00:09:47.413 09:35:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:47.413 [global] 00:09:47.413 thread=1 00:09:47.413 invalidate=1 00:09:47.413 rw=randwrite 00:09:47.413 time_based=1 00:09:47.413 runtime=1 00:09:47.413 ioengine=libaio 00:09:47.413 direct=1 00:09:47.413 bs=4096 00:09:47.413 iodepth=128 00:09:47.413 norandommap=0 00:09:47.413 numjobs=1 00:09:47.413 00:09:47.413 verify_dump=1 00:09:47.413 verify_backlog=512 00:09:47.413 verify_state_save=0 00:09:47.413 do_verify=1 00:09:47.413 verify=crc32c-intel 00:09:47.413 [job0] 00:09:47.413 filename=/dev/nvme0n1 00:09:47.413 [job1] 00:09:47.413 filename=/dev/nvme0n2 00:09:47.413 [job2] 00:09:47.413 filename=/dev/nvme0n3 00:09:47.413 [job3] 00:09:47.413 filename=/dev/nvme0n4 00:09:47.413 Could not set queue depth (nvme0n1) 00:09:47.413 Could not set queue depth (nvme0n2) 00:09:47.413 Could not set queue depth (nvme0n3) 00:09:47.413 Could not set queue depth (nvme0n4) 00:09:47.413 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.413 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.413 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.413 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.413 fio-3.35 00:09:47.413 Starting 4 threads 00:09:48.789 00:09:48.789 job0: (groupid=0, jobs=1): err= 0: pid=68948: Mon Jul 15 09:35:42 2024 00:09:48.789 read: IOPS=3707, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1004msec) 00:09:48.789 slat (usec): min=7, max=17177, avg=134.31, stdev=888.03 00:09:48.789 clat (usec): min=3160, max=41397, avg=18593.92, stdev=5203.32 00:09:48.789 lat (usec): min=4541, max=49273, avg=18728.23, stdev=5265.93 00:09:48.789 clat percentiles (usec): 00:09:48.789 | 1.00th=[10028], 5.00th=[13566], 10.00th=[13698], 20.00th=[13960], 00:09:48.789 | 30.00th=[14484], 40.00th=[17695], 50.00th=[19530], 60.00th=[20055], 00:09:48.789 | 70.00th=[20317], 80.00th=[20579], 90.00th=[21365], 95.00th=[23725], 00:09:48.789 | 99.00th=[39584], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:48.789 | 99.99th=[41157] 00:09:48.789 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:09:48.789 slat (usec): min=6, max=16189, avg=114.41, stdev=751.08 00:09:48.789 clat (usec): min=5731, max=27680, avg=14159.31, stdev=4438.92 00:09:48.789 lat (usec): min=8212, max=27705, avg=14273.72, stdev=4422.73 00:09:48.789 clat percentiles (usec): 00:09:48.789 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:09:48.789 | 30.00th=[10290], 40.00th=[10683], 50.00th=[13042], 60.00th=[15664], 00:09:48.789 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19530], 95.00th=[20055], 00:09:48.789 | 
99.00th=[26870], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:09:48.789 | 99.99th=[27657] 00:09:48.789 bw ( KiB/s): min=14328, max=18440, per=30.43%, avg=16384.00, stdev=2907.62, samples=2 00:09:48.789 iops : min= 3582, max= 4610, avg=4096.00, stdev=726.91, samples=2 00:09:48.789 lat (msec) : 4=0.01%, 10=12.36%, 20=67.26%, 50=20.38% 00:09:48.789 cpu : usr=3.79%, sys=10.77%, ctx=167, majf=0, minf=7 00:09:48.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:48.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.789 issued rwts: total=3722,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.789 job1: (groupid=0, jobs=1): err= 0: pid=68949: Mon Jul 15 09:35:42 2024 00:09:48.789 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:09:48.789 slat (usec): min=3, max=10181, avg=167.17, stdev=610.94 00:09:48.789 clat (usec): min=12496, max=31352, avg=21165.21, stdev=2939.43 00:09:48.789 lat (usec): min=12511, max=31552, avg=21332.38, stdev=2932.69 00:09:48.789 clat percentiles (usec): 00:09:48.789 | 1.00th=[15270], 5.00th=[16909], 10.00th=[18220], 20.00th=[19792], 00:09:48.789 | 30.00th=[20317], 40.00th=[20317], 50.00th=[20579], 60.00th=[20841], 00:09:48.789 | 70.00th=[21103], 80.00th=[22152], 90.00th=[25560], 95.00th=[27132], 00:09:48.789 | 99.00th=[30278], 99.50th=[30802], 99.90th=[31327], 99.95th=[31327], 00:09:48.789 | 99.99th=[31327] 00:09:48.789 write: IOPS=3397, BW=13.3MiB/s (13.9MB/s)(13.4MiB/1006msec); 0 zone resets 00:09:48.789 slat (usec): min=4, max=10040, avg=135.88, stdev=636.51 00:09:48.789 clat (usec): min=1836, max=28507, avg=18247.38, stdev=3758.35 00:09:48.789 lat (usec): min=5668, max=29584, avg=18383.26, stdev=3743.59 00:09:48.789 clat percentiles (usec): 00:09:48.789 | 1.00th=[ 8225], 5.00th=[11863], 10.00th=[13173], 20.00th=[14746], 00:09:48.789 | 30.00th=[16450], 40.00th=[18220], 50.00th=[19268], 60.00th=[19792], 00:09:48.789 | 70.00th=[20055], 80.00th=[20579], 90.00th=[21890], 95.00th=[24773], 00:09:48.789 | 99.00th=[26608], 99.50th=[27395], 99.90th=[27919], 99.95th=[28181], 00:09:48.789 | 99.99th=[28443] 00:09:48.789 bw ( KiB/s): min=13136, max=13184, per=24.44%, avg=13160.00, stdev=33.94, samples=2 00:09:48.789 iops : min= 3284, max= 3296, avg=3290.00, stdev= 8.49, samples=2 00:09:48.789 lat (msec) : 2=0.02%, 10=0.82%, 20=46.27%, 50=52.90% 00:09:48.789 cpu : usr=2.59%, sys=9.45%, ctx=718, majf=0, minf=8 00:09:48.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:48.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.789 issued rwts: total=3072,3418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.789 job2: (groupid=0, jobs=1): err= 0: pid=68950: Mon Jul 15 09:35:42 2024 00:09:48.789 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:48.789 slat (usec): min=8, max=6873, avg=161.22, stdev=571.13 00:09:48.789 clat (usec): min=13467, max=30762, avg=21268.66, stdev=2462.80 00:09:48.789 lat (usec): min=13477, max=31051, avg=21429.88, stdev=2451.78 00:09:48.789 clat percentiles (usec): 00:09:48.789 | 1.00th=[14615], 5.00th=[17433], 10.00th=[19006], 20.00th=[19792], 00:09:48.789 | 30.00th=[20317], 40.00th=[20579], 
50.00th=[20841], 60.00th=[21103], 00:09:48.789 | 70.00th=[21890], 80.00th=[22676], 90.00th=[25035], 95.00th=[26084], 00:09:48.789 | 99.00th=[28705], 99.50th=[28705], 99.90th=[30802], 99.95th=[30802], 00:09:48.789 | 99.99th=[30802] 00:09:48.789 write: IOPS=3239, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:09:48.789 slat (usec): min=5, max=10458, avg=148.45, stdev=686.20 00:09:48.789 clat (usec): min=322, max=28226, avg=18672.35, stdev=3403.31 00:09:48.789 lat (usec): min=2178, max=28995, avg=18820.80, stdev=3405.58 00:09:48.789 clat percentiles (usec): 00:09:48.789 | 1.00th=[ 3032], 5.00th=[12649], 10.00th=[14615], 20.00th=[16909], 00:09:48.789 | 30.00th=[18220], 40.00th=[19006], 50.00th=[19268], 60.00th=[19792], 00:09:48.789 | 70.00th=[20055], 80.00th=[20317], 90.00th=[21627], 95.00th=[23200], 00:09:48.789 | 99.00th=[26870], 99.50th=[27132], 99.90th=[28181], 99.95th=[28181], 00:09:48.789 | 99.99th=[28181] 00:09:48.789 bw ( KiB/s): min=12263, max=12263, per=22.78%, avg=12263.00, stdev= 0.00, samples=1 00:09:48.789 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:09:48.789 lat (usec) : 500=0.02% 00:09:48.789 lat (msec) : 4=0.51%, 10=0.51%, 20=43.44%, 50=55.53% 00:09:48.789 cpu : usr=2.60%, sys=9.20%, ctx=687, majf=0, minf=13 00:09:48.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:48.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.789 issued rwts: total=3072,3243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.789 job3: (groupid=0, jobs=1): err= 0: pid=68951: Mon Jul 15 09:35:42 2024 00:09:48.789 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:09:48.789 slat (usec): min=8, max=19839, avg=175.74, stdev=1032.08 00:09:48.789 clat (usec): min=13448, max=55427, avg=22368.25, stdev=6844.28 00:09:48.789 lat (usec): min=13473, max=55467, avg=22543.99, stdev=6909.33 00:09:48.789 clat percentiles (usec): 00:09:48.789 | 1.00th=[13960], 5.00th=[15401], 10.00th=[17695], 20.00th=[19268], 00:09:48.789 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20055], 60.00th=[20317], 00:09:48.789 | 70.00th=[20841], 80.00th=[21627], 90.00th=[35390], 95.00th=[39584], 00:09:48.789 | 99.00th=[47449], 99.50th=[49546], 99.90th=[50070], 99.95th=[55313], 00:09:48.789 | 99.99th=[55313] 00:09:48.789 write: IOPS=2766, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1006msec); 0 zone resets 00:09:48.789 slat (usec): min=13, max=11706, avg=188.87, stdev=933.68 00:09:48.789 clat (usec): min=5710, max=63215, avg=25154.52, stdev=13098.82 00:09:48.789 lat (usec): min=7149, max=63237, avg=25343.39, stdev=13182.77 00:09:48.789 clat percentiles (usec): 00:09:48.789 | 1.00th=[11076], 5.00th=[11863], 10.00th=[13829], 20.00th=[17695], 00:09:48.789 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19530], 60.00th=[20055], 00:09:48.789 | 70.00th=[23725], 80.00th=[35914], 90.00th=[48497], 95.00th=[56361], 00:09:48.789 | 99.00th=[62129], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:09:48.789 | 99.99th=[63177] 00:09:48.789 bw ( KiB/s): min= 8920, max=12328, per=19.73%, avg=10624.00, stdev=2409.82, samples=2 00:09:48.789 iops : min= 2230, max= 3082, avg=2656.00, stdev=602.45, samples=2 00:09:48.789 lat (msec) : 10=0.47%, 20=52.16%, 50=42.56%, 100=4.81% 00:09:48.789 cpu : usr=2.29%, sys=9.25%, ctx=228, majf=0, minf=7 00:09:48.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, 
>=64=98.8% 00:09:48.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.789 issued rwts: total=2560,2783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.790 00:09:48.790 Run status group 0 (all jobs): 00:09:48.790 READ: bw=48.2MiB/s (50.6MB/s), 9.94MiB/s-14.5MiB/s (10.4MB/s-15.2MB/s), io=48.5MiB (50.9MB), run=1001-1006msec 00:09:48.790 WRITE: bw=52.6MiB/s (55.1MB/s), 10.8MiB/s-15.9MiB/s (11.3MB/s-16.7MB/s), io=52.9MiB (55.5MB), run=1001-1006msec 00:09:48.790 00:09:48.790 Disk stats (read/write): 00:09:48.790 nvme0n1: ios=3122/3351, merge=0/0, ticks=55664/46648, in_queue=102312, util=87.96% 00:09:48.790 nvme0n2: ios=2604/2977, merge=0/0, ticks=24262/21632, in_queue=45894, util=87.94% 00:09:48.790 nvme0n3: ios=2560/2827, merge=0/0, ticks=23255/21843, in_queue=45098, util=87.98% 00:09:48.790 nvme0n4: ios=2111/2560, merge=0/0, ticks=23175/27369, in_queue=50544, util=89.78% 00:09:48.790 09:35:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:48.790 09:35:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68965 00:09:48.790 09:35:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:48.790 09:35:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:48.790 [global] 00:09:48.790 thread=1 00:09:48.790 invalidate=1 00:09:48.790 rw=read 00:09:48.790 time_based=1 00:09:48.790 runtime=10 00:09:48.790 ioengine=libaio 00:09:48.790 direct=1 00:09:48.790 bs=4096 00:09:48.790 iodepth=1 00:09:48.790 norandommap=1 00:09:48.790 numjobs=1 00:09:48.790 00:09:48.790 [job0] 00:09:48.790 filename=/dev/nvme0n1 00:09:48.790 [job1] 00:09:48.790 filename=/dev/nvme0n2 00:09:48.790 [job2] 00:09:48.790 filename=/dev/nvme0n3 00:09:48.790 [job3] 00:09:48.790 filename=/dev/nvme0n4 00:09:48.790 Could not set queue depth (nvme0n1) 00:09:48.790 Could not set queue depth (nvme0n2) 00:09:48.790 Could not set queue depth (nvme0n3) 00:09:48.790 Could not set queue depth (nvme0n4) 00:09:48.790 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.790 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.790 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.790 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.790 fio-3.35 00:09:48.790 Starting 4 threads 00:09:52.072 09:35:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:52.072 fio: pid=69008, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:52.072 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=61620224, buflen=4096 00:09:52.072 09:35:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:52.072 fio: pid=69007, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:52.072 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=49463296, buflen=4096 00:09:52.072 09:35:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.072 09:35:46 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:52.331 fio: pid=69005, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:52.331 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=56369152, buflen=4096 00:09:52.589 09:35:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.589 09:35:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:52.589 fio: pid=69006, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:52.589 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=19378176, buflen=4096 00:09:52.847 00:09:52.847 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69005: Mon Jul 15 09:35:47 2024 00:09:52.847 read: IOPS=3931, BW=15.4MiB/s (16.1MB/s)(53.8MiB/3501msec) 00:09:52.847 slat (usec): min=8, max=13285, avg=16.58, stdev=164.32 00:09:52.847 clat (usec): min=135, max=3142, avg=236.48, stdev=60.74 00:09:52.847 lat (usec): min=148, max=13567, avg=253.06, stdev=175.52 00:09:52.847 clat percentiles (usec): 00:09:52.847 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 176], 00:09:52.847 | 30.00th=[ 233], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:09:52.847 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:09:52.847 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 586], 99.95th=[ 824], 00:09:52.847 | 99.99th=[ 2024] 00:09:52.847 bw ( KiB/s): min=13640, max=20112, per=23.10%, avg=15201.33, stdev=2423.55, samples=6 00:09:52.847 iops : min= 3410, max= 5028, avg=3800.33, stdev=605.89, samples=6 00:09:52.847 lat (usec) : 250=45.67%, 500=54.20%, 750=0.04%, 1000=0.04% 00:09:52.847 lat (msec) : 2=0.02%, 4=0.01% 00:09:52.847 cpu : usr=1.20%, sys=4.89%, ctx=13770, majf=0, minf=1 00:09:52.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.847 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.847 issued rwts: total=13763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.847 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69006: Mon Jul 15 09:35:47 2024 00:09:52.847 read: IOPS=5602, BW=21.9MiB/s (22.9MB/s)(82.5MiB/3769msec) 00:09:52.847 slat (usec): min=10, max=11927, avg=16.16, stdev=153.56 00:09:52.847 clat (usec): min=125, max=2184, avg=160.81, stdev=39.25 00:09:52.847 lat (usec): min=138, max=12147, avg=176.97, stdev=159.18 00:09:52.847 clat percentiles (usec): 00:09:52.847 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:09:52.847 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:09:52.847 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 182], 00:09:52.847 | 99.00th=[ 202], 99.50th=[ 297], 99.90th=[ 685], 99.95th=[ 955], 00:09:52.847 | 99.99th=[ 1680] 00:09:52.847 bw ( KiB/s): min=21524, max=23272, per=34.08%, avg=22424.57, stdev=724.10, samples=7 00:09:52.847 iops : min= 5381, max= 5818, avg=5606.14, stdev=181.03, samples=7 00:09:52.847 lat (usec) : 250=99.32%, 500=0.48%, 750=0.10%, 1000=0.04% 00:09:52.847 lat (msec) : 2=0.04%, 4=0.01% 00:09:52.847 cpu : usr=1.38%, sys=7.22%, ctx=21126, majf=0, minf=1 
00:09:52.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.847 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.847 issued rwts: total=21116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.847 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69007: Mon Jul 15 09:35:47 2024 00:09:52.847 read: IOPS=3729, BW=14.6MiB/s (15.3MB/s)(47.2MiB/3238msec) 00:09:52.847 slat (usec): min=8, max=13315, avg=15.48, stdev=161.62 00:09:52.847 clat (usec): min=150, max=7940, avg=251.23, stdev=116.59 00:09:52.847 lat (usec): min=163, max=13547, avg=266.71, stdev=199.04 00:09:52.847 clat percentiles (usec): 00:09:52.847 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 235], 00:09:52.847 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:09:52.847 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:09:52.847 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 881], 99.95th=[ 2089], 00:09:52.847 | 99.99th=[ 6587] 00:09:52.847 bw ( KiB/s): min=14304, max=18400, per=22.85%, avg=15033.33, stdev=1650.01, samples=6 00:09:52.847 iops : min= 3576, max= 4600, avg=3758.33, stdev=412.50, samples=6 00:09:52.847 lat (usec) : 250=36.44%, 500=63.34%, 750=0.07%, 1000=0.06% 00:09:52.847 lat (msec) : 2=0.02%, 4=0.04%, 10=0.02% 00:09:52.847 cpu : usr=1.24%, sys=4.42%, ctx=12080, majf=0, minf=1 00:09:52.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.847 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.847 issued rwts: total=12077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.847 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69008: Mon Jul 15 09:35:47 2024 00:09:52.847 read: IOPS=5081, BW=19.8MiB/s (20.8MB/s)(58.8MiB/2961msec) 00:09:52.847 slat (nsec): min=11055, max=73410, avg=16636.35, stdev=4799.25 00:09:52.847 clat (usec): min=145, max=538, avg=178.43, stdev=15.43 00:09:52.847 lat (usec): min=157, max=550, avg=195.07, stdev=17.24 00:09:52.847 clat percentiles (usec): 00:09:52.847 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:09:52.847 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:09:52.847 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:09:52.847 | 99.00th=[ 221], 99.50th=[ 227], 99.90th=[ 265], 99.95th=[ 293], 00:09:52.847 | 99.99th=[ 498] 00:09:52.847 bw ( KiB/s): min=19536, max=21528, per=30.81%, avg=20272.00, stdev=828.82, samples=5 00:09:52.847 iops : min= 4884, max= 5382, avg=5068.00, stdev=207.21, samples=5 00:09:52.847 lat (usec) : 250=99.86%, 500=0.13%, 750=0.01% 00:09:52.847 cpu : usr=1.62%, sys=7.70%, ctx=15046, majf=0, minf=1 00:09:52.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.847 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.847 issued rwts: total=15045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.847 00:09:52.847 Run status group 0 (all 
jobs): 00:09:52.847 READ: bw=64.3MiB/s (67.4MB/s), 14.6MiB/s-21.9MiB/s (15.3MB/s-22.9MB/s), io=242MiB (254MB), run=2961-3769msec 00:09:52.847 00:09:52.847 Disk stats (read/write): 00:09:52.847 nvme0n1: ios=12991/0, merge=0/0, ticks=3083/0, in_queue=3083, util=95.31% 00:09:52.847 nvme0n2: ios=20235/0, merge=0/0, ticks=3355/0, in_queue=3355, util=95.50% 00:09:52.847 nvme0n3: ios=11622/0, merge=0/0, ticks=2801/0, in_queue=2801, util=95.75% 00:09:52.847 nvme0n4: ios=14557/0, merge=0/0, ticks=2650/0, in_queue=2650, util=96.76% 00:09:52.847 09:35:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.847 09:35:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:53.117 09:35:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.117 09:35:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:53.403 09:35:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.403 09:35:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:53.662 09:35:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.662 09:35:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:53.919 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.919 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68965 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:54.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.177 nvmf hotplug test: fio failed as expected 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:54.177 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:54.435 rmmod nvme_tcp 00:09:54.435 rmmod nvme_fabrics 00:09:54.435 rmmod nvme_keyring 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68578 ']' 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68578 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68578 ']' 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68578 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.435 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68578 00:09:54.691 killing process with pid 68578 00:09:54.691 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:54.691 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:54.691 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68578' 00:09:54.691 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68578 00:09:54.691 09:35:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68578 00:09:54.691 09:35:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:54.691 09:35:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:54.691 09:35:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:54.691 09:35:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:54.691 09:35:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:54.691 09:35:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.691 09:35:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.691 09:35:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.948 09:35:49 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:54.948 00:09:54.948 real 0m19.575s 00:09:54.948 user 1m13.440s 00:09:54.948 sys 0m10.833s 00:09:54.948 09:35:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.948 09:35:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.948 ************************************ 00:09:54.948 END TEST nvmf_fio_target 00:09:54.948 ************************************ 00:09:54.948 09:35:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:54.948 09:35:49 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:54.948 09:35:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:54.948 09:35:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.948 09:35:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:54.948 ************************************ 00:09:54.948 START TEST nvmf_bdevio 00:09:54.948 ************************************ 00:09:54.948 09:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:54.948 * Looking for test storage... 00:09:54.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:54.948 09:35:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:54.948 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:54.948 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.948 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.948 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.948 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.948 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:54.949 Cannot find device "nvmf_tgt_br" 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:54.949 Cannot find device "nvmf_tgt_br2" 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:54.949 Cannot find device "nvmf_tgt_br" 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:09:54.949 
09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:54.949 Cannot find device "nvmf_tgt_br2" 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:09:54.949 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:55.206 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:55.206 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:55.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.206 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:55.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:55.207 
09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:55.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:09:55.207 00:09:55.207 --- 10.0.0.2 ping statistics --- 00:09:55.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.207 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:55.207 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:55.207 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:55.207 00:09:55.207 --- 10.0.0.3 ping statistics --- 00:09:55.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.207 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:55.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:55.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:55.207 00:09:55.207 --- 10.0.0.1 ping statistics --- 00:09:55.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.207 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:55.207 09:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.465 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69276 00:09:55.465 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69276 00:09:55.465 09:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69276 ']' 00:09:55.465 09:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.465 09:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:55.465 09:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
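For readers tracing the nvmf_veth_init steps logged above, the following is a condensed sketch of the topology those commands build. It only regroups commands that already appear in this log; interface names and the 10.0.0.x/24 addresses are the ones printed above, the second target interface pair (nvmf_tgt_if2 / nvmf_tgt_br2 / 10.0.0.3) follows the same pattern and is omitted, and the real helper in nvmf/common.sh does additional teardown and error handling not shown here.

  # Sketch only: condensed from the nvmf_veth_init commands in this log, not the full helper.
  ip netns add nvmf_tgt_ns_spdk                                        # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br              # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                       # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                             # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge                                      # bridge joining the host-side veth peers
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP to the listener port
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability check

With this layout in mind, the ping statistics above are simply the initiator (10.0.0.1), the first target interface (10.0.0.2), and the second target interface (10.0.0.3) verifying reachability across the bridge before the nvmf target is started inside the namespace.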
00:09:55.465 09:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:55.465 09:35:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.465 09:35:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:55.465 [2024-07-15 09:35:49.772196] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:55.465 [2024-07-15 09:35:49.772330] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.465 [2024-07-15 09:35:49.926555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.723 [2024-07-15 09:35:50.050857] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.723 [2024-07-15 09:35:50.051518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.723 [2024-07-15 09:35:50.052042] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.723 [2024-07-15 09:35:50.052694] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.723 [2024-07-15 09:35:50.052985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.723 [2024-07-15 09:35:50.053428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:55.723 [2024-07-15 09:35:50.053660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:55.723 [2024-07-15 09:35:50.053560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:55.723 [2024-07-15 09:35:50.053662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.723 [2024-07-15 09:35:50.112943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.657 [2024-07-15 09:35:50.817794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.657 Malloc0 00:09:56.657 
09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.657 [2024-07-15 09:35:50.881553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:56.657 { 00:09:56.657 "params": { 00:09:56.657 "name": "Nvme$subsystem", 00:09:56.657 "trtype": "$TEST_TRANSPORT", 00:09:56.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:56.657 "adrfam": "ipv4", 00:09:56.657 "trsvcid": "$NVMF_PORT", 00:09:56.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:56.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:56.657 "hdgst": ${hdgst:-false}, 00:09:56.657 "ddgst": ${ddgst:-false} 00:09:56.657 }, 00:09:56.657 "method": "bdev_nvme_attach_controller" 00:09:56.657 } 00:09:56.657 EOF 00:09:56.657 )") 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:56.657 09:35:50 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:56.657 "params": { 00:09:56.657 "name": "Nvme1", 00:09:56.657 "trtype": "tcp", 00:09:56.657 "traddr": "10.0.0.2", 00:09:56.657 "adrfam": "ipv4", 00:09:56.657 "trsvcid": "4420", 00:09:56.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:56.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:56.657 "hdgst": false, 00:09:56.657 "ddgst": false 00:09:56.657 }, 00:09:56.657 "method": "bdev_nvme_attach_controller" 00:09:56.657 }' 00:09:56.657 [2024-07-15 09:35:50.941001] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:09:56.657 [2024-07-15 09:35:50.941086] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69318 ] 00:09:56.657 [2024-07-15 09:35:51.080467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:56.915 [2024-07-15 09:35:51.182818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.915 [2024-07-15 09:35:51.182944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.915 [2024-07-15 09:35:51.182943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.915 [2024-07-15 09:35:51.245782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:56.915 I/O targets: 00:09:56.915 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:56.915 00:09:56.915 00:09:56.915 CUnit - A unit testing framework for C - Version 2.1-3 00:09:56.915 http://cunit.sourceforge.net/ 00:09:56.915 00:09:56.915 00:09:56.915 Suite: bdevio tests on: Nvme1n1 00:09:56.915 Test: blockdev write read block ...passed 00:09:56.915 Test: blockdev write zeroes read block ...passed 00:09:56.915 Test: blockdev write zeroes read no split ...passed 00:09:57.173 Test: blockdev write zeroes read split ...passed 00:09:57.173 Test: blockdev write zeroes read split partial ...passed 00:09:57.173 Test: blockdev reset ...[2024-07-15 09:35:51.391981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:57.173 [2024-07-15 09:35:51.392081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8cf7c0 (9): Bad file descriptor 00:09:57.173 passed 00:09:57.174 Test: blockdev write read 8 blocks ...[2024-07-15 09:35:51.408091] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:57.174 passed 00:09:57.174 Test: blockdev write read size > 128k ...passed 00:09:57.174 Test: blockdev write read invalid size ...passed 00:09:57.174 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:57.174 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:57.174 Test: blockdev write read max offset ...passed 00:09:57.174 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:57.174 Test: blockdev writev readv 8 blocks ...passed 00:09:57.174 Test: blockdev writev readv 30 x 1block ...passed 00:09:57.174 Test: blockdev writev readv block ...passed 00:09:57.174 Test: blockdev writev readv size > 128k ...passed 00:09:57.174 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:57.174 Test: blockdev comparev and writev ...[2024-07-15 09:35:51.416617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.174 [2024-07-15 09:35:51.416981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:57.174 [2024-07-15 09:35:51.417120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.174 [2024-07-15 09:35:51.417232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:57.174 [2024-07-15 09:35:51.417784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.174 [2024-07-15 09:35:51.417954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:57.174 [2024-07-15 09:35:51.418081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.174 [2024-07-15 09:35:51.418176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:57.174 [2024-07-15 09:35:51.418581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.174 [2024-07-15 09:35:51.418839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:57.174 [2024-07-15 09:35:51.419096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.174 [2024-07-15 09:35:51.419440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:57.174 [2024-07-15 09:35:51.419872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.174 [2024-07-15 09:35:51.420144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:57.174 [2024-07-15 09:35:51.420410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL passed 00:09:57.174 Test: blockdev nvme passthru rw ...DATA BLOCK OFFSET 0x0 len:0x200 00:09:57.174 [2024-07-15 09:35:51.420675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:57.174 passed 00:09:57.174 Test: blockdev nvme passthru vendor specific ...[2024-07-15 09:35:51.421697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:57.174 [2024-07-15 09:35:51.421833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:57.174 [2024-07-15 09:35:51.422246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:57.174 [2024-07-15 09:35:51.422368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:57.174 [2024-07-15 09:35:51.422587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:57.174 [2024-07-15 09:35:51.422812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:57.174 [2024-07-15 09:35:51.423096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:57.174 [2024-07-15 09:35:51.423322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 spassed 00:09:57.174 Test: blockdev nvme admin passthru ...qhd:002f p:0 m:0 dnr:0 00:09:57.174 passed 00:09:57.174 Test: blockdev copy ...passed 00:09:57.174 00:09:57.174 Run Summary: Type Total Ran Passed Failed Inactive 00:09:57.174 suites 1 1 n/a 0 0 00:09:57.174 tests 23 23 23 0 0 00:09:57.174 asserts 152 152 152 0 n/a 00:09:57.174 00:09:57.174 Elapsed time = 0.164 seconds 00:09:57.174 09:35:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.174 09:35:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.174 09:35:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.445 rmmod nvme_tcp 00:09:57.445 rmmod nvme_fabrics 00:09:57.445 rmmod nvme_keyring 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69276 ']' 00:09:57.445 09:35:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69276 00:09:57.446 09:35:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 69276 ']' 00:09:57.446 09:35:51 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@952 -- # kill -0 69276 00:09:57.446 09:35:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:09:57.446 09:35:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.446 09:35:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69276 00:09:57.446 killing process with pid 69276 00:09:57.446 09:35:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:09:57.446 09:35:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:09:57.446 09:35:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69276' 00:09:57.446 09:35:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69276 00:09:57.446 09:35:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69276 00:09:57.726 09:35:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:57.726 09:35:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:57.726 09:35:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:57.726 09:35:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.726 09:35:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.726 09:35:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.726 09:35:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.726 09:35:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.726 09:35:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:57.726 00:09:57.726 real 0m2.835s 00:09:57.726 user 0m9.219s 00:09:57.726 sys 0m0.804s 00:09:57.726 09:35:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.726 ************************************ 00:09:57.726 END TEST nvmf_bdevio 00:09:57.726 ************************************ 00:09:57.726 09:35:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.726 09:35:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:57.726 09:35:52 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:57.726 09:35:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:57.726 09:35:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.726 09:35:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:57.726 ************************************ 00:09:57.726 START TEST nvmf_auth_target 00:09:57.726 ************************************ 00:09:57.726 09:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:57.985 * Looking for test storage... 
00:09:57.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.985 09:35:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:57.986 Cannot find device "nvmf_tgt_br" 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.986 Cannot find device "nvmf_tgt_br2" 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:57.986 Cannot find device "nvmf_tgt_br" 00:09:57.986 
09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:57.986 Cannot find device "nvmf_tgt_br2" 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.986 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:57.986 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:58.245 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:58.245 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:58.245 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:58.245 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:58.245 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:58.245 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:58.245 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:58.245 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:58.245 09:35:52 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:58.245 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:58.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:09:58.246 00:09:58.246 --- 10.0.0.2 ping statistics --- 00:09:58.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.246 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:58.246 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:58.246 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:09:58.246 00:09:58.246 --- 10.0.0.3 ping statistics --- 00:09:58.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.246 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:58.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:58.246 00:09:58.246 --- 10.0.0.1 ping statistics --- 00:09:58.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.246 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69492 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69492 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69492 ']' 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:58.246 09:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69524 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=826765d501e1d9f3302308dd8e6291d6204934ec01a85785 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.F3H 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 826765d501e1d9f3302308dd8e6291d6204934ec01a85785 0 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 826765d501e1d9f3302308dd8e6291d6204934ec01a85785 0 00:09:59.619 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=826765d501e1d9f3302308dd8e6291d6204934ec01a85785 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.F3H 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.F3H 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.F3H 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=61f42d4ca9b003bb83cdafb2b9a29440c353e0c192f59105000493fabc1fd8e0 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sCb 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 61f42d4ca9b003bb83cdafb2b9a29440c353e0c192f59105000493fabc1fd8e0 3 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 61f42d4ca9b003bb83cdafb2b9a29440c353e0c192f59105000493fabc1fd8e0 3 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=61f42d4ca9b003bb83cdafb2b9a29440c353e0c192f59105000493fabc1fd8e0 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sCb 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sCb 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.sCb 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b8a8ac6e38a7288829afdb6c6f2f8eeb 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.M3h 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b8a8ac6e38a7288829afdb6c6f2f8eeb 1 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b8a8ac6e38a7288829afdb6c6f2f8eeb 1 
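The gen_dhchap_key calls traced above all follow the same pattern: read len/2 random bytes as a hex string with xxd, format the secret with an inline python snippet (whose body is not echoed by xtrace), write it to a mktemp file, and chmod it to 0600. A stand-alone sketch of that formatting step, assuming the secret representation is the ASCII hex string with a little-endian CRC-32 appended before base64 encoding (which is consistent with the DHHC-1:..:...: strings handed to nvme connect later in this log):

len=48                                        # requested key length in hex characters
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" <<'PY' > "$file"
import base64, sys, zlib
key = sys.argv[1].encode()                    # the hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC-32 appended before encoding
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
chmod 0600 "$file"

The two-digit field after DHHC-1 carries the digest index that format_dhchap_key receives (00 = null/unhashed, 01 = sha256, 02 = sha384, 03 = sha512), which is why each gen_dhchap_key call above pairs a digest name with a key length.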
00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b8a8ac6e38a7288829afdb6c6f2f8eeb 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.M3h 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.M3h 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.M3h 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ae519d9d1d55abfd1277b20ee157958af92ae46bff8f485d 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.mcK 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ae519d9d1d55abfd1277b20ee157958af92ae46bff8f485d 2 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ae519d9d1d55abfd1277b20ee157958af92ae46bff8f485d 2 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ae519d9d1d55abfd1277b20ee157958af92ae46bff8f485d 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.mcK 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.mcK 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.mcK 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:09:59.620 
09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=534d2f5e087d1bb104501ad5aff143a4a9f7f6cf4b951385 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.WoB 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 534d2f5e087d1bb104501ad5aff143a4a9f7f6cf4b951385 2 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 534d2f5e087d1bb104501ad5aff143a4a9f7f6cf4b951385 2 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=534d2f5e087d1bb104501ad5aff143a4a9f7f6cf4b951385 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:09:59.620 09:35:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.WoB 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.WoB 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.WoB 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=84cc965abc8218b6262f9a7ba35e664e 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.iqj 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 84cc965abc8218b6262f9a7ba35e664e 1 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 84cc965abc8218b6262f9a7ba35e664e 1 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=84cc965abc8218b6262f9a7ba35e664e 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.iqj 00:09:59.620 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.iqj 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.iqj 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d7c3d743e8188d094672e04bd6f83bae9701af1b2bf3cbd5ae57bcbc4b56e2be 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2HK 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d7c3d743e8188d094672e04bd6f83bae9701af1b2bf3cbd5ae57bcbc4b56e2be 3 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d7c3d743e8188d094672e04bd6f83bae9701af1b2bf3cbd5ae57bcbc4b56e2be 3 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d7c3d743e8188d094672e04bd6f83bae9701af1b2bf3cbd5ae57bcbc4b56e2be 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2HK 00:09:59.878 09:35:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2HK 00:09:59.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.879 09:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.2HK 00:09:59.879 09:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:09:59.879 09:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69492 00:09:59.879 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69492 ']' 00:09:59.879 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.879 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:59.879 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
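All four key/ckey pairs now exist as flat files under /tmp, and two daemons are being brought up: nvmf_tgt inside the nvmf_tgt_ns_spdk namespace on /var/tmp/spdk.sock, and a second spdk_tgt playing the host role on /var/tmp/host.sock. The loop that follows registers every key file on both sides with keyring_file_add_key. Condensed to the first pair, and assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock here, the calls amount to:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# target side (default socket /var/tmp/spdk.sock)
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.F3H
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sCb
# host side ("hostrpc" in auth.sh points at /var/tmp/host.sock)
$rpc -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.F3H
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sCb

The same pair of registrations repeats for key1/ckey1, key2/ckey2 and key3 (ckey3 is empty, so only the key itself is added).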
00:09:59.879 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:59.879 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.137 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.137 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:00.137 09:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69524 /var/tmp/host.sock 00:10:00.137 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69524 ']' 00:10:00.137 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:00.137 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.137 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:00.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:00.137 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.137 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.395 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.395 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:00.396 09:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:00.396 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.396 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.396 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.396 09:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:00.396 09:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.F3H 00:10:00.396 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.396 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.396 09:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.396 09:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.F3H 00:10:00.396 09:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.F3H 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.sCb ]] 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sCb 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sCb 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.sCb 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.M3h 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.M3h 00:10:00.963 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.M3h 00:10:01.221 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.mcK ]] 00:10:01.221 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mcK 00:10:01.221 09:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.221 09:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.221 09:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.221 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mcK 00:10:01.221 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mcK 00:10:01.479 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:01.479 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.WoB 00:10:01.479 09:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.479 09:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.479 09:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.479 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.WoB 00:10:01.479 09:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.WoB 00:10:01.737 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.iqj ]] 00:10:01.737 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iqj 00:10:01.737 09:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.737 09:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.737 09:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.737 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iqj 00:10:01.737 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iqj 00:10:01.996 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:01.996 
09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2HK 00:10:01.996 09:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.996 09:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.996 09:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.996 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2HK 00:10:01.996 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2HK 00:10:02.255 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:02.255 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:02.255 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:02.255 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:02.255 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:02.255 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:02.519 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:02.519 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:02.519 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:02.519 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:02.519 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:02.519 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:02.519 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.519 09:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.519 09:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.519 09:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.519 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.519 09:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.086 00:10:03.086 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:03.086 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:03.086 09:35:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:03.343 { 00:10:03.343 "cntlid": 1, 00:10:03.343 "qid": 0, 00:10:03.343 "state": "enabled", 00:10:03.343 "thread": "nvmf_tgt_poll_group_000", 00:10:03.343 "listen_address": { 00:10:03.343 "trtype": "TCP", 00:10:03.343 "adrfam": "IPv4", 00:10:03.343 "traddr": "10.0.0.2", 00:10:03.343 "trsvcid": "4420" 00:10:03.343 }, 00:10:03.343 "peer_address": { 00:10:03.343 "trtype": "TCP", 00:10:03.343 "adrfam": "IPv4", 00:10:03.343 "traddr": "10.0.0.1", 00:10:03.343 "trsvcid": "42846" 00:10:03.343 }, 00:10:03.343 "auth": { 00:10:03.343 "state": "completed", 00:10:03.343 "digest": "sha256", 00:10:03.343 "dhgroup": "null" 00:10:03.343 } 00:10:03.343 } 00:10:03.343 ]' 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:03.343 09:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:03.601 09:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:10:08.862 09:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:08.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:08.862 09:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:08.862 09:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.862 09:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.862 09:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.862 09:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:08.862 09:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:08.862 09:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:08.862 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:08.862 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:08.862 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:08.862 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:08.862 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:08.862 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:08.862 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:08.862 09:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.862 09:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.862 09:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.862 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:08.863 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:09.120 00:10:09.120 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:09.120 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:09.120 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:09.379 { 00:10:09.379 "cntlid": 3, 00:10:09.379 "qid": 0, 00:10:09.379 "state": "enabled", 00:10:09.379 "thread": "nvmf_tgt_poll_group_000", 00:10:09.379 "listen_address": { 00:10:09.379 "trtype": "TCP", 00:10:09.379 "adrfam": "IPv4", 00:10:09.379 "traddr": "10.0.0.2", 00:10:09.379 "trsvcid": "4420" 00:10:09.379 }, 00:10:09.379 "peer_address": { 00:10:09.379 "trtype": "TCP", 00:10:09.379 
"adrfam": "IPv4", 00:10:09.379 "traddr": "10.0.0.1", 00:10:09.379 "trsvcid": "53834" 00:10:09.379 }, 00:10:09.379 "auth": { 00:10:09.379 "state": "completed", 00:10:09.379 "digest": "sha256", 00:10:09.379 "dhgroup": "null" 00:10:09.379 } 00:10:09.379 } 00:10:09.379 ]' 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:09.379 09:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:09.636 09:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:10:10.571 09:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:10.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:10.571 09:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:10.571 09:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.571 09:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.571 09:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.571 09:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:10.571 09:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:10.571 09:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:10.830 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:10.830 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:10.830 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:10.830 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:10.830 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:10.830 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.830 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:10.830 09:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.830 09:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.830 09:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.830 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:10.830 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:11.088 00:10:11.088 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:11.088 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:11.088 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:11.346 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:11.346 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:11.346 09:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.346 09:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.346 09:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.346 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:11.346 { 00:10:11.346 "cntlid": 5, 00:10:11.346 "qid": 0, 00:10:11.346 "state": "enabled", 00:10:11.346 "thread": "nvmf_tgt_poll_group_000", 00:10:11.346 "listen_address": { 00:10:11.346 "trtype": "TCP", 00:10:11.346 "adrfam": "IPv4", 00:10:11.346 "traddr": "10.0.0.2", 00:10:11.346 "trsvcid": "4420" 00:10:11.346 }, 00:10:11.346 "peer_address": { 00:10:11.346 "trtype": "TCP", 00:10:11.346 "adrfam": "IPv4", 00:10:11.346 "traddr": "10.0.0.1", 00:10:11.346 "trsvcid": "53858" 00:10:11.346 }, 00:10:11.346 "auth": { 00:10:11.346 "state": "completed", 00:10:11.346 "digest": "sha256", 00:10:11.346 "dhgroup": "null" 00:10:11.346 } 00:10:11.346 } 00:10:11.346 ]' 00:10:11.346 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:11.605 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:11.605 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:11.605 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:11.605 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:11.605 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:11.605 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:11.605 09:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.863 09:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:10:12.799 09:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.799 09:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:12.799 09:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.799 09:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.799 09:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.799 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:12.799 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:12.799 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:13.058 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:13.058 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:13.058 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:13.058 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:13.058 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:13.058 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.058 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:10:13.058 09:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.058 09:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.058 09:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.058 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:13.058 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:13.417 00:10:13.417 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:13.417 09:36:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.417 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:13.691 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.691 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.691 09:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.691 09:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.691 09:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.691 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:13.691 { 00:10:13.691 "cntlid": 7, 00:10:13.691 "qid": 0, 00:10:13.691 "state": "enabled", 00:10:13.691 "thread": "nvmf_tgt_poll_group_000", 00:10:13.691 "listen_address": { 00:10:13.691 "trtype": "TCP", 00:10:13.691 "adrfam": "IPv4", 00:10:13.691 "traddr": "10.0.0.2", 00:10:13.691 "trsvcid": "4420" 00:10:13.691 }, 00:10:13.691 "peer_address": { 00:10:13.691 "trtype": "TCP", 00:10:13.691 "adrfam": "IPv4", 00:10:13.691 "traddr": "10.0.0.1", 00:10:13.691 "trsvcid": "53906" 00:10:13.691 }, 00:10:13.691 "auth": { 00:10:13.691 "state": "completed", 00:10:13.691 "digest": "sha256", 00:10:13.691 "dhgroup": "null" 00:10:13.691 } 00:10:13.691 } 00:10:13.691 ]' 00:10:13.691 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:13.691 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:13.691 09:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:13.691 09:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:13.691 09:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:13.691 09:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.691 09:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:13.691 09:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.949 09:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:10:14.882 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.882 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:14.882 09:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.882 09:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.882 09:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.883 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:10:14.883 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:14.883 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:14.883 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:15.140 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:15.140 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:15.140 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:15.140 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:15.140 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:15.140 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.140 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.140 09:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.140 09:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.140 09:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.140 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.140 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:15.398 00:10:15.398 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:15.398 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:15.398 09:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.655 09:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.655 09:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.655 09:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.655 09:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.655 09:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.655 09:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:15.655 { 00:10:15.655 "cntlid": 9, 00:10:15.655 "qid": 0, 00:10:15.655 "state": "enabled", 00:10:15.655 "thread": "nvmf_tgt_poll_group_000", 00:10:15.655 "listen_address": { 00:10:15.655 "trtype": "TCP", 00:10:15.655 "adrfam": "IPv4", 00:10:15.655 
"traddr": "10.0.0.2", 00:10:15.655 "trsvcid": "4420" 00:10:15.655 }, 00:10:15.655 "peer_address": { 00:10:15.655 "trtype": "TCP", 00:10:15.655 "adrfam": "IPv4", 00:10:15.655 "traddr": "10.0.0.1", 00:10:15.655 "trsvcid": "54128" 00:10:15.655 }, 00:10:15.655 "auth": { 00:10:15.655 "state": "completed", 00:10:15.655 "digest": "sha256", 00:10:15.655 "dhgroup": "ffdhe2048" 00:10:15.655 } 00:10:15.655 } 00:10:15.655 ]' 00:10:15.655 09:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:15.912 09:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:15.912 09:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:15.912 09:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:15.912 09:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:15.912 09:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.912 09:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.912 09:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.168 09:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.102 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:17.668 00:10:17.668 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:17.668 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:17.668 09:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:17.951 { 00:10:17.951 "cntlid": 11, 00:10:17.951 "qid": 0, 00:10:17.951 "state": "enabled", 00:10:17.951 "thread": "nvmf_tgt_poll_group_000", 00:10:17.951 "listen_address": { 00:10:17.951 "trtype": "TCP", 00:10:17.951 "adrfam": "IPv4", 00:10:17.951 "traddr": "10.0.0.2", 00:10:17.951 "trsvcid": "4420" 00:10:17.951 }, 00:10:17.951 "peer_address": { 00:10:17.951 "trtype": "TCP", 00:10:17.951 "adrfam": "IPv4", 00:10:17.951 "traddr": "10.0.0.1", 00:10:17.951 "trsvcid": "54148" 00:10:17.951 }, 00:10:17.951 "auth": { 00:10:17.951 "state": "completed", 00:10:17.951 "digest": "sha256", 00:10:17.951 "dhgroup": "ffdhe2048" 00:10:17.951 } 00:10:17.951 } 00:10:17.951 ]' 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.951 09:36:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.951 09:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.208 09:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:10:18.799 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.799 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:18.799 09:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.799 09:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.057 09:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.315 09:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.315 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:19.315 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:19.572 00:10:19.572 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:19.572 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:19.572 09:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.830 09:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.830 09:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.830 09:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.830 09:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.831 09:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.831 09:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:19.831 { 00:10:19.831 "cntlid": 13, 00:10:19.831 "qid": 0, 00:10:19.831 "state": "enabled", 00:10:19.831 "thread": "nvmf_tgt_poll_group_000", 00:10:19.831 "listen_address": { 00:10:19.831 "trtype": "TCP", 00:10:19.831 "adrfam": "IPv4", 00:10:19.831 "traddr": "10.0.0.2", 00:10:19.831 "trsvcid": "4420" 00:10:19.831 }, 00:10:19.831 "peer_address": { 00:10:19.831 "trtype": "TCP", 00:10:19.831 "adrfam": "IPv4", 00:10:19.831 "traddr": "10.0.0.1", 00:10:19.831 "trsvcid": "54178" 00:10:19.831 }, 00:10:19.831 "auth": { 00:10:19.831 "state": "completed", 00:10:19.831 "digest": "sha256", 00:10:19.831 "dhgroup": "ffdhe2048" 00:10:19.831 } 00:10:19.831 } 00:10:19.831 ]' 00:10:19.831 09:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:19.831 09:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:19.831 09:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:19.831 09:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:19.831 09:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:20.089 09:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.089 09:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.089 09:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.346 09:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:10:20.912 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.912 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 
00:10:20.912 09:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.912 09:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.912 09:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.912 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:20.912 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:20.912 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:21.170 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:21.170 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:21.170 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:21.170 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:21.170 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:21.170 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.170 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:10:21.170 09:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.170 09:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.170 09:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.170 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:21.170 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:21.428 00:10:21.428 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:21.428 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:21.686 09:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:21.944 { 00:10:21.944 "cntlid": 15, 00:10:21.944 "qid": 0, 
00:10:21.944 "state": "enabled", 00:10:21.944 "thread": "nvmf_tgt_poll_group_000", 00:10:21.944 "listen_address": { 00:10:21.944 "trtype": "TCP", 00:10:21.944 "adrfam": "IPv4", 00:10:21.944 "traddr": "10.0.0.2", 00:10:21.944 "trsvcid": "4420" 00:10:21.944 }, 00:10:21.944 "peer_address": { 00:10:21.944 "trtype": "TCP", 00:10:21.944 "adrfam": "IPv4", 00:10:21.944 "traddr": "10.0.0.1", 00:10:21.944 "trsvcid": "54208" 00:10:21.944 }, 00:10:21.944 "auth": { 00:10:21.944 "state": "completed", 00:10:21.944 "digest": "sha256", 00:10:21.944 "dhgroup": "ffdhe2048" 00:10:21.944 } 00:10:21.944 } 00:10:21.944 ]' 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.944 09:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.254 09:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:10:22.834 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.834 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.092 09:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.350 09:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.350 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:23.350 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:23.608 00:10:23.608 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:23.608 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.608 09:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:23.865 09:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.865 09:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.865 09:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.865 09:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.865 09:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.865 09:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:23.865 { 00:10:23.865 "cntlid": 17, 00:10:23.865 "qid": 0, 00:10:23.865 "state": "enabled", 00:10:23.865 "thread": "nvmf_tgt_poll_group_000", 00:10:23.865 "listen_address": { 00:10:23.865 "trtype": "TCP", 00:10:23.865 "adrfam": "IPv4", 00:10:23.865 "traddr": "10.0.0.2", 00:10:23.865 "trsvcid": "4420" 00:10:23.865 }, 00:10:23.865 "peer_address": { 00:10:23.866 "trtype": "TCP", 00:10:23.866 "adrfam": "IPv4", 00:10:23.866 "traddr": "10.0.0.1", 00:10:23.866 "trsvcid": "54246" 00:10:23.866 }, 00:10:23.866 "auth": { 00:10:23.866 "state": "completed", 00:10:23.866 "digest": "sha256", 00:10:23.866 "dhgroup": "ffdhe3072" 00:10:23.866 } 00:10:23.866 } 00:10:23.866 ]' 00:10:23.866 09:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:23.866 09:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:23.866 09:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:23.866 09:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:23.866 09:36:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:23.866 09:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.866 09:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.866 09:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.123 09:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:10:25.057 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.057 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:25.057 09:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.057 09:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.057 09:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.057 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:25.057 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:25.057 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:25.337 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:25.337 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:25.337 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:25.337 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:25.337 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:25.337 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.337 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.337 09:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.337 09:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.337 09:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.337 09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.337 
09:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:25.595 00:10:25.595 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:25.595 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:25.595 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.851 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.851 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.851 09:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.851 09:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.851 09:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.851 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:25.851 { 00:10:25.851 "cntlid": 19, 00:10:25.851 "qid": 0, 00:10:25.851 "state": "enabled", 00:10:25.851 "thread": "nvmf_tgt_poll_group_000", 00:10:25.851 "listen_address": { 00:10:25.851 "trtype": "TCP", 00:10:25.851 "adrfam": "IPv4", 00:10:25.851 "traddr": "10.0.0.2", 00:10:25.851 "trsvcid": "4420" 00:10:25.851 }, 00:10:25.851 "peer_address": { 00:10:25.851 "trtype": "TCP", 00:10:25.851 "adrfam": "IPv4", 00:10:25.851 "traddr": "10.0.0.1", 00:10:25.851 "trsvcid": "56498" 00:10:25.851 }, 00:10:25.851 "auth": { 00:10:25.851 "state": "completed", 00:10:25.851 "digest": "sha256", 00:10:25.851 "dhgroup": "ffdhe3072" 00:10:25.851 } 00:10:25.851 } 00:10:25.851 ]' 00:10:25.851 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:26.109 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.109 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:26.109 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:26.109 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:26.109 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.109 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.109 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.368 09:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:10:27.304 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:10:27.304 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:27.304 09:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.304 09:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.304 09:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.304 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:27.304 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:27.304 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:27.562 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:27.562 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:27.562 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:27.562 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:27.562 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:27.562 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.562 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.562 09:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.562 09:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.562 09:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.562 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.562 09:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:27.853 00:10:27.853 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:27.853 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:27.853 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.112 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.112 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.112 09:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.112 09:36:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.112 09:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.112 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:28.112 { 00:10:28.112 "cntlid": 21, 00:10:28.112 "qid": 0, 00:10:28.112 "state": "enabled", 00:10:28.112 "thread": "nvmf_tgt_poll_group_000", 00:10:28.112 "listen_address": { 00:10:28.112 "trtype": "TCP", 00:10:28.112 "adrfam": "IPv4", 00:10:28.112 "traddr": "10.0.0.2", 00:10:28.112 "trsvcid": "4420" 00:10:28.112 }, 00:10:28.112 "peer_address": { 00:10:28.112 "trtype": "TCP", 00:10:28.112 "adrfam": "IPv4", 00:10:28.112 "traddr": "10.0.0.1", 00:10:28.112 "trsvcid": "56528" 00:10:28.112 }, 00:10:28.112 "auth": { 00:10:28.112 "state": "completed", 00:10:28.112 "digest": "sha256", 00:10:28.112 "dhgroup": "ffdhe3072" 00:10:28.112 } 00:10:28.112 } 00:10:28.112 ]' 00:10:28.112 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:28.112 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.112 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:28.112 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:28.112 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:28.371 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.371 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.371 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.629 09:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:10:29.194 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.194 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:29.194 09:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.194 09:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.194 09:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.194 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:29.194 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:29.194 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:29.453 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:29.453 09:36:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:29.453 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:29.453 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:29.453 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:29.453 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.453 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:10:29.453 09:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.453 09:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.453 09:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.453 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:29.453 09:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:30.018 00:10:30.018 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:30.018 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:30.018 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:30.277 { 00:10:30.277 "cntlid": 23, 00:10:30.277 "qid": 0, 00:10:30.277 "state": "enabled", 00:10:30.277 "thread": "nvmf_tgt_poll_group_000", 00:10:30.277 "listen_address": { 00:10:30.277 "trtype": "TCP", 00:10:30.277 "adrfam": "IPv4", 00:10:30.277 "traddr": "10.0.0.2", 00:10:30.277 "trsvcid": "4420" 00:10:30.277 }, 00:10:30.277 "peer_address": { 00:10:30.277 "trtype": "TCP", 00:10:30.277 "adrfam": "IPv4", 00:10:30.277 "traddr": "10.0.0.1", 00:10:30.277 "trsvcid": "56550" 00:10:30.277 }, 00:10:30.277 "auth": { 00:10:30.277 "state": "completed", 00:10:30.277 "digest": "sha256", 00:10:30.277 "dhgroup": "ffdhe3072" 00:10:30.277 } 00:10:30.277 } 00:10:30.277 ]' 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.277 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.536 09:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:10:31.471 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.471 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:31.471 09:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.471 09:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.471 09:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.471 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:31.471 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:31.471 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:31.471 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:31.729 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:31.729 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:31.729 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:31.729 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:31.729 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:31.729 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.729 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.729 09:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.729 09:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.729 09:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.729 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.729 09:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:31.987 00:10:31.987 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:31.987 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:31.987 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.248 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.248 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.248 09:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.248 09:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.248 09:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.248 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:32.248 { 00:10:32.248 "cntlid": 25, 00:10:32.248 "qid": 0, 00:10:32.248 "state": "enabled", 00:10:32.248 "thread": "nvmf_tgt_poll_group_000", 00:10:32.248 "listen_address": { 00:10:32.248 "trtype": "TCP", 00:10:32.248 "adrfam": "IPv4", 00:10:32.248 "traddr": "10.0.0.2", 00:10:32.248 "trsvcid": "4420" 00:10:32.248 }, 00:10:32.248 "peer_address": { 00:10:32.248 "trtype": "TCP", 00:10:32.248 "adrfam": "IPv4", 00:10:32.248 "traddr": "10.0.0.1", 00:10:32.248 "trsvcid": "56578" 00:10:32.248 }, 00:10:32.248 "auth": { 00:10:32.248 "state": "completed", 00:10:32.248 "digest": "sha256", 00:10:32.248 "dhgroup": "ffdhe4096" 00:10:32.248 } 00:10:32.248 } 00:10:32.248 ]' 00:10:32.248 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:32.248 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:32.248 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:32.248 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:32.248 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:32.510 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.510 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.510 09:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.769 09:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret 
DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:10:33.337 09:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.337 09:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:33.337 09:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.337 09:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.337 09:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.337 09:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:33.337 09:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:33.337 09:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:33.904 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:33.904 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.904 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:33.904 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:33.904 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:33.904 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.904 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.904 09:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.904 09:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.904 09:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.904 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.904 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.162 00:10:34.162 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:34.162 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:34.162 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.420 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:10:34.420 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.420 09:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.420 09:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.420 09:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.420 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:34.420 { 00:10:34.420 "cntlid": 27, 00:10:34.420 "qid": 0, 00:10:34.420 "state": "enabled", 00:10:34.420 "thread": "nvmf_tgt_poll_group_000", 00:10:34.420 "listen_address": { 00:10:34.420 "trtype": "TCP", 00:10:34.420 "adrfam": "IPv4", 00:10:34.420 "traddr": "10.0.0.2", 00:10:34.420 "trsvcid": "4420" 00:10:34.420 }, 00:10:34.420 "peer_address": { 00:10:34.420 "trtype": "TCP", 00:10:34.420 "adrfam": "IPv4", 00:10:34.420 "traddr": "10.0.0.1", 00:10:34.420 "trsvcid": "56602" 00:10:34.420 }, 00:10:34.420 "auth": { 00:10:34.420 "state": "completed", 00:10:34.420 "digest": "sha256", 00:10:34.420 "dhgroup": "ffdhe4096" 00:10:34.420 } 00:10:34.420 } 00:10:34.420 ]' 00:10:34.420 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:34.420 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.420 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:34.420 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:34.420 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:34.679 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.679 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.679 09:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.937 09:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:10:35.507 09:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.507 09:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:35.507 09:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.507 09:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.507 09:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.507 09:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:35.507 09:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:35.507 09:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:35.763 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:35.763 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:35.763 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:35.763 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:35.763 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:35.763 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.763 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.763 09:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.763 09:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.763 09:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.763 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.763 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.328 00:10:36.328 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:36.328 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.328 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.585 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.585 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.585 09:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.585 09:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.585 09:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.585 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:36.585 { 00:10:36.585 "cntlid": 29, 00:10:36.585 "qid": 0, 00:10:36.585 "state": "enabled", 00:10:36.585 "thread": "nvmf_tgt_poll_group_000", 00:10:36.585 "listen_address": { 00:10:36.585 "trtype": "TCP", 00:10:36.585 "adrfam": "IPv4", 00:10:36.585 "traddr": "10.0.0.2", 00:10:36.585 "trsvcid": "4420" 00:10:36.585 }, 00:10:36.585 "peer_address": { 00:10:36.585 "trtype": "TCP", 00:10:36.585 "adrfam": "IPv4", 00:10:36.585 "traddr": "10.0.0.1", 00:10:36.585 "trsvcid": "50178" 00:10:36.585 }, 00:10:36.585 "auth": { 00:10:36.585 "state": "completed", 00:10:36.585 "digest": "sha256", 00:10:36.585 "dhgroup": 
"ffdhe4096" 00:10:36.585 } 00:10:36.585 } 00:10:36.585 ]' 00:10:36.585 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:36.585 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.585 09:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:36.585 09:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:36.585 09:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:36.842 09:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.842 09:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.842 09:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.099 09:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:10:37.663 09:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.663 09:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:37.663 09:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.663 09:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.663 09:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.663 09:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:37.663 09:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.663 09:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.920 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:37.920 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:37.920 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:37.920 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:37.920 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:37.920 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.920 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:10:37.920 09:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.920 09:36:32 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:10:37.920 09:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.920 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:37.920 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:38.177 00:10:38.177 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:38.177 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:38.177 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.434 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.434 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.434 09:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.434 09:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.434 09:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.434 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:38.434 { 00:10:38.434 "cntlid": 31, 00:10:38.434 "qid": 0, 00:10:38.434 "state": "enabled", 00:10:38.434 "thread": "nvmf_tgt_poll_group_000", 00:10:38.434 "listen_address": { 00:10:38.434 "trtype": "TCP", 00:10:38.434 "adrfam": "IPv4", 00:10:38.434 "traddr": "10.0.0.2", 00:10:38.434 "trsvcid": "4420" 00:10:38.434 }, 00:10:38.434 "peer_address": { 00:10:38.434 "trtype": "TCP", 00:10:38.434 "adrfam": "IPv4", 00:10:38.434 "traddr": "10.0.0.1", 00:10:38.434 "trsvcid": "50212" 00:10:38.434 }, 00:10:38.434 "auth": { 00:10:38.434 "state": "completed", 00:10:38.434 "digest": "sha256", 00:10:38.434 "dhgroup": "ffdhe4096" 00:10:38.435 } 00:10:38.435 } 00:10:38.435 ]' 00:10:38.435 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:38.435 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.435 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:38.692 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:38.692 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:38.692 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.692 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.692 09:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.949 09:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid 
d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:10:39.514 09:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.514 09:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:39.514 09:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.514 09:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.514 09:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.514 09:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.514 09:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:39.514 09:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:39.514 09:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:39.789 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:10:39.789 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:39.789 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:39.789 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:39.789 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:39.789 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.790 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.790 09:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.790 09:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.790 09:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.790 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.790 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.368 00:10:40.368 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:40.368 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:40.368 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.625 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.625 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.625 09:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.625 09:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.625 09:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.625 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:40.625 { 00:10:40.626 "cntlid": 33, 00:10:40.626 "qid": 0, 00:10:40.626 "state": "enabled", 00:10:40.626 "thread": "nvmf_tgt_poll_group_000", 00:10:40.626 "listen_address": { 00:10:40.626 "trtype": "TCP", 00:10:40.626 "adrfam": "IPv4", 00:10:40.626 "traddr": "10.0.0.2", 00:10:40.626 "trsvcid": "4420" 00:10:40.626 }, 00:10:40.626 "peer_address": { 00:10:40.626 "trtype": "TCP", 00:10:40.626 "adrfam": "IPv4", 00:10:40.626 "traddr": "10.0.0.1", 00:10:40.626 "trsvcid": "50242" 00:10:40.626 }, 00:10:40.626 "auth": { 00:10:40.626 "state": "completed", 00:10:40.626 "digest": "sha256", 00:10:40.626 "dhgroup": "ffdhe6144" 00:10:40.626 } 00:10:40.626 } 00:10:40.626 ]' 00:10:40.626 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.626 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.626 09:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.626 09:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:40.626 09:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.626 09:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.626 09:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.626 09:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.883 09:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:10:41.814 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.814 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:41.814 09:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.814 09:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.814 09:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.814 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:41.814 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:41.814 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:42.071 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:42.071 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:42.071 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:42.071 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:42.071 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:42.071 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.071 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.071 09:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.071 09:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.071 09:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.071 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.071 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.328 00:10:42.328 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:42.328 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.328 09:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:42.586 09:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.586 09:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.586 09:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.586 09:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.586 09:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.586 09:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:42.586 { 00:10:42.586 "cntlid": 35, 00:10:42.586 "qid": 0, 00:10:42.586 "state": "enabled", 00:10:42.586 "thread": "nvmf_tgt_poll_group_000", 00:10:42.586 "listen_address": { 00:10:42.586 "trtype": "TCP", 00:10:42.586 "adrfam": "IPv4", 00:10:42.586 "traddr": "10.0.0.2", 00:10:42.586 "trsvcid": "4420" 00:10:42.586 }, 00:10:42.586 "peer_address": { 00:10:42.586 "trtype": 
"TCP", 00:10:42.586 "adrfam": "IPv4", 00:10:42.586 "traddr": "10.0.0.1", 00:10:42.586 "trsvcid": "50258" 00:10:42.586 }, 00:10:42.586 "auth": { 00:10:42.586 "state": "completed", 00:10:42.586 "digest": "sha256", 00:10:42.586 "dhgroup": "ffdhe6144" 00:10:42.586 } 00:10:42.586 } 00:10:42.586 ]' 00:10:42.586 09:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:42.844 09:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.844 09:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:42.844 09:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:42.844 09:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:42.844 09:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.844 09:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.844 09:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.102 09:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:10:44.035 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.035 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:44.035 09:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.035 09:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.035 09:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.035 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:44.035 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:44.035 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:44.293 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:44.293 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.293 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.293 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:44.293 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:44.293 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.293 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.293 09:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.293 09:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.293 09:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.293 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.293 09:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.551 00:10:44.551 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.551 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.551 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:45.118 { 00:10:45.118 "cntlid": 37, 00:10:45.118 "qid": 0, 00:10:45.118 "state": "enabled", 00:10:45.118 "thread": "nvmf_tgt_poll_group_000", 00:10:45.118 "listen_address": { 00:10:45.118 "trtype": "TCP", 00:10:45.118 "adrfam": "IPv4", 00:10:45.118 "traddr": "10.0.0.2", 00:10:45.118 "trsvcid": "4420" 00:10:45.118 }, 00:10:45.118 "peer_address": { 00:10:45.118 "trtype": "TCP", 00:10:45.118 "adrfam": "IPv4", 00:10:45.118 "traddr": "10.0.0.1", 00:10:45.118 "trsvcid": "40852" 00:10:45.118 }, 00:10:45.118 "auth": { 00:10:45.118 "state": "completed", 00:10:45.118 "digest": "sha256", 00:10:45.118 "dhgroup": "ffdhe6144" 00:10:45.118 } 00:10:45.118 } 00:10:45.118 ]' 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.118 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.376 09:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:10:45.942 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.943 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:45.943 09:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.943 09:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.943 09:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.943 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:45.943 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:45.943 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:46.201 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:10:46.201 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:46.201 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:46.201 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:46.201 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:46.201 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.201 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:10:46.201 09:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.201 09:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.201 09:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.201 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:46.201 09:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:46.766 00:10:46.766 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
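[editor's note] At this point the trace is inside another connect_authenticate round (sha256 with ffdhe6144 and key3 here; key3 is the one key the script registers without a controller key). Condensed, a single round drives the RPC sequence sketched below. This is a reconstruction from the trace, not the script itself: rpc_cmd and hostrpc are the suite's own helpers (hostrpc wraps scripts/rpc.py -s /var/tmp/host.sock, as the expanded commands above show), and the shell variables are introduced only for readability.

# Sketch of one DH-HMAC-CHAP round as exercised in this log (key0/ckey0 shown,
# matching the earlier rounds; the key3 rounds drop --dhchap-ctrlr-key).
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da

# Host-side bdev layer: pin the negotiation to one digest/dhgroup pair.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

# Target side: allow this host with key0 (and ckey0 for bidirectional auth).
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attaching the controller triggers the DH-HMAC-CHAP handshake.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify what was negotiated, as the jq checks in the trace do.
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
jq -r '.[0].auth.digest'  <<< "$qpairs"                 # sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"                 # ffdhe6144
jq -r '.[0].auth.state'   <<< "$qpairs"                 # completed

# Tear the SPDK-initiator path down again before the kernel-initiator leg.
hostrpc bdev_nvme_detach_controller nvme0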
00:10:46.766 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:46.766 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.024 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.024 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.024 09:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.024 09:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.024 09:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.024 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:47.024 { 00:10:47.024 "cntlid": 39, 00:10:47.024 "qid": 0, 00:10:47.024 "state": "enabled", 00:10:47.024 "thread": "nvmf_tgt_poll_group_000", 00:10:47.024 "listen_address": { 00:10:47.024 "trtype": "TCP", 00:10:47.024 "adrfam": "IPv4", 00:10:47.024 "traddr": "10.0.0.2", 00:10:47.024 "trsvcid": "4420" 00:10:47.024 }, 00:10:47.024 "peer_address": { 00:10:47.024 "trtype": "TCP", 00:10:47.024 "adrfam": "IPv4", 00:10:47.024 "traddr": "10.0.0.1", 00:10:47.024 "trsvcid": "40862" 00:10:47.024 }, 00:10:47.024 "auth": { 00:10:47.024 "state": "completed", 00:10:47.024 "digest": "sha256", 00:10:47.024 "dhgroup": "ffdhe6144" 00:10:47.024 } 00:10:47.024 } 00:10:47.024 ]' 00:10:47.024 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:47.024 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:47.024 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:47.283 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:47.283 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:47.283 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.283 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.283 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.541 09:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:10:48.107 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.107 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:48.107 09:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.107 09:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.107 09:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.107 09:36:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:48.107 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:48.107 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:48.107 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:48.366 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:10:48.366 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:48.366 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:48.366 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:48.366 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:48.366 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.366 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.366 09:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.366 09:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.366 09:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.366 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.366 09:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.300 00:10:49.300 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.300 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.300 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.300 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.300 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.300 09:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.300 09:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.300 09:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.300 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:49.300 { 00:10:49.300 "cntlid": 41, 00:10:49.300 "qid": 0, 00:10:49.300 "state": "enabled", 00:10:49.300 "thread": "nvmf_tgt_poll_group_000", 00:10:49.300 "listen_address": { 00:10:49.300 "trtype": 
"TCP", 00:10:49.300 "adrfam": "IPv4", 00:10:49.300 "traddr": "10.0.0.2", 00:10:49.300 "trsvcid": "4420" 00:10:49.300 }, 00:10:49.300 "peer_address": { 00:10:49.300 "trtype": "TCP", 00:10:49.300 "adrfam": "IPv4", 00:10:49.300 "traddr": "10.0.0.1", 00:10:49.300 "trsvcid": "40878" 00:10:49.300 }, 00:10:49.300 "auth": { 00:10:49.300 "state": "completed", 00:10:49.300 "digest": "sha256", 00:10:49.300 "dhgroup": "ffdhe8192" 00:10:49.300 } 00:10:49.300 } 00:10:49.300 ]' 00:10:49.300 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:49.558 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.558 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:49.558 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:49.558 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:49.558 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.558 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.559 09:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.817 09:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:10:50.384 09:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.642 09:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:50.642 09:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.642 09:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.642 09:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.642 09:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:50.642 09:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:50.642 09:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:50.901 09:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:10:50.901 09:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:50.901 09:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:50.901 09:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:50.901 09:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:50.901 09:36:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.901 09:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.901 09:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.901 09:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.901 09:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.901 09:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.901 09:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:51.469 00:10:51.469 09:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:51.469 09:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.469 09:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.727 09:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.727 09:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.727 09:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.727 09:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.727 09:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.727 09:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:51.727 { 00:10:51.727 "cntlid": 43, 00:10:51.727 "qid": 0, 00:10:51.727 "state": "enabled", 00:10:51.727 "thread": "nvmf_tgt_poll_group_000", 00:10:51.727 "listen_address": { 00:10:51.727 "trtype": "TCP", 00:10:51.727 "adrfam": "IPv4", 00:10:51.727 "traddr": "10.0.0.2", 00:10:51.727 "trsvcid": "4420" 00:10:51.727 }, 00:10:51.727 "peer_address": { 00:10:51.727 "trtype": "TCP", 00:10:51.727 "adrfam": "IPv4", 00:10:51.727 "traddr": "10.0.0.1", 00:10:51.727 "trsvcid": "40908" 00:10:51.727 }, 00:10:51.727 "auth": { 00:10:51.727 "state": "completed", 00:10:51.727 "digest": "sha256", 00:10:51.727 "dhgroup": "ffdhe8192" 00:10:51.727 } 00:10:51.727 } 00:10:51.727 ]' 00:10:51.727 09:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:51.985 09:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.985 09:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:51.985 09:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:51.985 09:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:51.985 09:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:10:51.985 09:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.985 09:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.243 09:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:10:53.176 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.176 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:53.176 09:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.176 09:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.176 09:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.176 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:53.176 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:53.176 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:53.434 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:10:53.434 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:53.434 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:53.434 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:53.434 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:53.434 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.434 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:53.434 09:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.434 09:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.434 09:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.434 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:53.434 09:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:54.003 00:10:54.003 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.003 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.003 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.270 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.270 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.270 09:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.270 09:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.270 09:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.270 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.270 { 00:10:54.270 "cntlid": 45, 00:10:54.270 "qid": 0, 00:10:54.270 "state": "enabled", 00:10:54.270 "thread": "nvmf_tgt_poll_group_000", 00:10:54.270 "listen_address": { 00:10:54.270 "trtype": "TCP", 00:10:54.270 "adrfam": "IPv4", 00:10:54.270 "traddr": "10.0.0.2", 00:10:54.270 "trsvcid": "4420" 00:10:54.270 }, 00:10:54.270 "peer_address": { 00:10:54.270 "trtype": "TCP", 00:10:54.270 "adrfam": "IPv4", 00:10:54.270 "traddr": "10.0.0.1", 00:10:54.270 "trsvcid": "40922" 00:10:54.270 }, 00:10:54.270 "auth": { 00:10:54.270 "state": "completed", 00:10:54.270 "digest": "sha256", 00:10:54.270 "dhgroup": "ffdhe8192" 00:10:54.270 } 00:10:54.270 } 00:10:54.270 ]' 00:10:54.270 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.529 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.529 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.529 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:54.529 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.529 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.529 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.529 09:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.788 09:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:10:55.354 09:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.354 09:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:55.354 09:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.354 09:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.613 09:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.613 09:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.613 09:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:55.613 09:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:55.613 09:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:10:55.613 09:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.613 09:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:55.613 09:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:10:55.613 09:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:55.613 09:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.613 09:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:10:55.613 09:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.613 09:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.871 09:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.872 09:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:55.872 09:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:56.438 00:10:56.438 09:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.438 09:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.438 09:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.695 09:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.695 09:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.695 09:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.695 09:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.695 09:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.695 09:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:10:56.695 { 00:10:56.695 "cntlid": 47, 00:10:56.695 "qid": 0, 00:10:56.695 "state": "enabled", 00:10:56.695 "thread": "nvmf_tgt_poll_group_000", 00:10:56.695 "listen_address": { 00:10:56.695 "trtype": "TCP", 00:10:56.695 "adrfam": "IPv4", 00:10:56.695 "traddr": "10.0.0.2", 00:10:56.695 "trsvcid": "4420" 00:10:56.695 }, 00:10:56.695 "peer_address": { 00:10:56.695 "trtype": "TCP", 00:10:56.695 "adrfam": "IPv4", 00:10:56.695 "traddr": "10.0.0.1", 00:10:56.695 "trsvcid": "42000" 00:10:56.695 }, 00:10:56.695 "auth": { 00:10:56.695 "state": "completed", 00:10:56.695 "digest": "sha256", 00:10:56.695 "dhgroup": "ffdhe8192" 00:10:56.695 } 00:10:56.695 } 00:10:56.695 ]' 00:10:56.695 09:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.952 09:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.952 09:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.952 09:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:56.952 09:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.952 09:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.952 09:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.952 09:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.211 09:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
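[editor's note] The sha256 sweep ends here and the script starts over with sha384 and the null dhgroup (no DH exchange, plain challenge-response). Each round also repeats the same authentication through the kernel initiator before the host entry is removed; condensed from the trace, that leg looks like the sketch below. The shortened <...> secret placeholders are the editor's; the full DHHC-1 strings appear verbatim in the connect lines of the log.

# Kernel-initiator leg of a round (key0 shown, as in the rounds above).
# The host UUID doubles as the host NQN suffix and as --hostid.
uuid=d2f81337-7559-423d-93ce-5836d202b6da

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${uuid}" --hostid "${uuid}" \
    --dhchap-secret 'DHHC-1:00:<host key, base64>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller key, base64>:'

# In the log the disconnect then reports:
# "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Drop the host from the subsystem again before the next keyid/dhgroup round.
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    "nqn.2014-08.org.nvmexpress:uuid:${uuid}"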
00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:58.145 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:58.709 00:10:58.709 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.709 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.709 09:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.972 { 00:10:58.972 "cntlid": 49, 00:10:58.972 "qid": 0, 00:10:58.972 "state": "enabled", 00:10:58.972 "thread": "nvmf_tgt_poll_group_000", 00:10:58.972 "listen_address": { 00:10:58.972 "trtype": "TCP", 00:10:58.972 "adrfam": "IPv4", 00:10:58.972 "traddr": "10.0.0.2", 00:10:58.972 "trsvcid": "4420" 00:10:58.972 }, 00:10:58.972 "peer_address": { 00:10:58.972 "trtype": "TCP", 00:10:58.972 "adrfam": "IPv4", 00:10:58.972 "traddr": "10.0.0.1", 00:10:58.972 "trsvcid": "42040" 00:10:58.972 }, 00:10:58.972 "auth": { 00:10:58.972 "state": "completed", 00:10:58.972 "digest": "sha384", 00:10:58.972 "dhgroup": "null" 00:10:58.972 } 00:10:58.972 } 00:10:58.972 ]' 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.972 09:36:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.972 09:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.229 09:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:11:00.218 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.219 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:00.219 09:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.219 09:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.219 09:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.219 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.219 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:00.219 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:00.476 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:00.476 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.476 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:00.476 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:00.476 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:00.476 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.476 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:00.476 09:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.476 09:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.476 09:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.476 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:00.476 09:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:00.734 00:11:00.734 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.734 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.734 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.991 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.991 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.991 09:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.991 09:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.991 09:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.991 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:00.991 { 00:11:00.991 "cntlid": 51, 00:11:00.991 "qid": 0, 00:11:00.991 "state": "enabled", 00:11:00.991 "thread": "nvmf_tgt_poll_group_000", 00:11:00.991 "listen_address": { 00:11:00.991 "trtype": "TCP", 00:11:00.991 "adrfam": "IPv4", 00:11:00.991 "traddr": "10.0.0.2", 00:11:00.991 "trsvcid": "4420" 00:11:00.991 }, 00:11:00.991 "peer_address": { 00:11:00.991 "trtype": "TCP", 00:11:00.991 "adrfam": "IPv4", 00:11:00.991 "traddr": "10.0.0.1", 00:11:00.991 "trsvcid": "42068" 00:11:00.991 }, 00:11:00.991 "auth": { 00:11:00.991 "state": "completed", 00:11:00.991 "digest": "sha384", 00:11:00.991 "dhgroup": "null" 00:11:00.991 } 00:11:00.991 } 00:11:00.991 ]' 00:11:00.992 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:00.992 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:00.992 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.249 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:01.249 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.249 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.249 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.249 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.507 09:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:02.463 09:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.028 00:11:03.028 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.028 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.028 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.286 { 00:11:03.286 "cntlid": 53, 00:11:03.286 "qid": 0, 00:11:03.286 "state": "enabled", 00:11:03.286 "thread": "nvmf_tgt_poll_group_000", 00:11:03.286 "listen_address": { 00:11:03.286 "trtype": "TCP", 00:11:03.286 "adrfam": "IPv4", 00:11:03.286 "traddr": "10.0.0.2", 00:11:03.286 "trsvcid": "4420" 00:11:03.286 }, 00:11:03.286 "peer_address": { 00:11:03.286 "trtype": "TCP", 00:11:03.286 "adrfam": "IPv4", 00:11:03.286 "traddr": "10.0.0.1", 00:11:03.286 "trsvcid": "42104" 00:11:03.286 }, 00:11:03.286 "auth": { 00:11:03.286 "state": "completed", 00:11:03.286 "digest": "sha384", 00:11:03.286 "dhgroup": "null" 00:11:03.286 } 00:11:03.286 } 00:11:03.286 ]' 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.286 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.543 09:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:11:04.533 09:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.533 09:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:04.533 09:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.533 09:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.533 09:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.533 09:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.533 09:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:04.533 09:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:04.804 09:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:11:04.804 09:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.804 09:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:04.804 09:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:04.804 09:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:04.804 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.804 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:11:04.804 09:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.804 09:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.804 09:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.804 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:04.804 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:05.063 00:11:05.063 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.063 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.063 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.321 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.321 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.321 09:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.321 09:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.321 09:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.321 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.321 { 00:11:05.321 "cntlid": 55, 00:11:05.321 "qid": 0, 00:11:05.321 "state": "enabled", 00:11:05.321 "thread": "nvmf_tgt_poll_group_000", 00:11:05.321 "listen_address": { 00:11:05.321 "trtype": "TCP", 00:11:05.321 "adrfam": "IPv4", 00:11:05.321 "traddr": "10.0.0.2", 00:11:05.321 "trsvcid": "4420" 00:11:05.321 }, 00:11:05.321 "peer_address": { 00:11:05.321 "trtype": "TCP", 00:11:05.321 "adrfam": "IPv4", 00:11:05.321 "traddr": "10.0.0.1", 00:11:05.321 "trsvcid": "54160" 00:11:05.321 }, 00:11:05.321 "auth": { 00:11:05.321 "state": "completed", 00:11:05.321 "digest": "sha384", 00:11:05.321 "dhgroup": "null" 00:11:05.321 } 00:11:05.321 } 00:11:05.321 ]' 00:11:05.321 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.321 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:05.321 09:36:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.321 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:05.321 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.579 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.579 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.579 09:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.837 09:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:11:06.405 09:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.405 09:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:06.405 09:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.405 09:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.405 09:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.405 09:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:06.405 09:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.405 09:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:06.405 09:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:06.662 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:06.662 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.662 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:06.663 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:06.663 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:06.663 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.663 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.663 09:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.663 09:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.663 09:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.663 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.663 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.921 00:11:06.921 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.921 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.921 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.179 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.436 { 00:11:07.436 "cntlid": 57, 00:11:07.436 "qid": 0, 00:11:07.436 "state": "enabled", 00:11:07.436 "thread": "nvmf_tgt_poll_group_000", 00:11:07.436 "listen_address": { 00:11:07.436 "trtype": "TCP", 00:11:07.436 "adrfam": "IPv4", 00:11:07.436 "traddr": "10.0.0.2", 00:11:07.436 "trsvcid": "4420" 00:11:07.436 }, 00:11:07.436 "peer_address": { 00:11:07.436 "trtype": "TCP", 00:11:07.436 "adrfam": "IPv4", 00:11:07.436 "traddr": "10.0.0.1", 00:11:07.436 "trsvcid": "54192" 00:11:07.436 }, 00:11:07.436 "auth": { 00:11:07.436 "state": "completed", 00:11:07.436 "digest": "sha384", 00:11:07.436 "dhgroup": "ffdhe2048" 00:11:07.436 } 00:11:07.436 } 00:11:07.436 ]' 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.436 09:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.693 09:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret 
DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:11:08.628 09:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.628 09:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:08.628 09:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.628 09:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.628 09:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.628 09:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.628 09:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:08.628 09:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:08.886 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:08.886 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.886 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:08.886 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:08.886 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:08.886 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.886 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.886 09:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.886 09:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.886 09:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.886 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.886 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.144 00:11:09.144 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.144 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.144 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.426 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:11:09.426 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.426 09:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.426 09:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.426 09:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.426 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.426 { 00:11:09.426 "cntlid": 59, 00:11:09.426 "qid": 0, 00:11:09.426 "state": "enabled", 00:11:09.426 "thread": "nvmf_tgt_poll_group_000", 00:11:09.426 "listen_address": { 00:11:09.426 "trtype": "TCP", 00:11:09.426 "adrfam": "IPv4", 00:11:09.426 "traddr": "10.0.0.2", 00:11:09.426 "trsvcid": "4420" 00:11:09.426 }, 00:11:09.426 "peer_address": { 00:11:09.426 "trtype": "TCP", 00:11:09.426 "adrfam": "IPv4", 00:11:09.426 "traddr": "10.0.0.1", 00:11:09.426 "trsvcid": "54212" 00:11:09.426 }, 00:11:09.426 "auth": { 00:11:09.426 "state": "completed", 00:11:09.426 "digest": "sha384", 00:11:09.426 "dhgroup": "ffdhe2048" 00:11:09.426 } 00:11:09.426 } 00:11:09.426 ]' 00:11:09.426 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.426 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.426 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.701 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:09.701 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.701 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.701 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.701 09:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.958 09:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:11:10.524 09:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.524 09:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:10.524 09:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.524 09:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.524 09:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.524 09:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.524 09:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:10.524 09:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:10.782 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:10.782 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.782 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:10.782 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:10.782 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:10.782 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.782 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.782 09:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.782 09:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.782 09:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.782 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.782 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.349 00:11:11.349 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.349 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.349 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.607 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.607 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.607 09:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.607 09:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.607 09:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.607 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.607 { 00:11:11.607 "cntlid": 61, 00:11:11.607 "qid": 0, 00:11:11.608 "state": "enabled", 00:11:11.608 "thread": "nvmf_tgt_poll_group_000", 00:11:11.608 "listen_address": { 00:11:11.608 "trtype": "TCP", 00:11:11.608 "adrfam": "IPv4", 00:11:11.608 "traddr": "10.0.0.2", 00:11:11.608 "trsvcid": "4420" 00:11:11.608 }, 00:11:11.608 "peer_address": { 00:11:11.608 "trtype": "TCP", 00:11:11.608 "adrfam": "IPv4", 00:11:11.608 "traddr": "10.0.0.1", 00:11:11.608 "trsvcid": "54236" 00:11:11.608 }, 00:11:11.608 "auth": { 00:11:11.608 "state": "completed", 00:11:11.608 "digest": "sha384", 00:11:11.608 "dhgroup": 
"ffdhe2048" 00:11:11.608 } 00:11:11.608 } 00:11:11.608 ]' 00:11:11.608 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.608 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.608 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.608 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:11.608 09:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.608 09:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.608 09:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.608 09:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.174 09:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:11:12.740 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.740 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:12.740 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.740 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.740 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.740 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.740 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:12.740 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:12.998 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:12.998 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.998 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:12.998 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:12.998 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:12.998 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.998 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:11:12.998 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.998 09:37:07 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:12.998 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.998 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:12.998 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:13.256 00:11:13.256 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.256 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.256 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.514 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.514 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.514 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.514 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 09:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.514 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.514 { 00:11:13.514 "cntlid": 63, 00:11:13.514 "qid": 0, 00:11:13.514 "state": "enabled", 00:11:13.514 "thread": "nvmf_tgt_poll_group_000", 00:11:13.514 "listen_address": { 00:11:13.514 "trtype": "TCP", 00:11:13.514 "adrfam": "IPv4", 00:11:13.514 "traddr": "10.0.0.2", 00:11:13.514 "trsvcid": "4420" 00:11:13.514 }, 00:11:13.514 "peer_address": { 00:11:13.514 "trtype": "TCP", 00:11:13.514 "adrfam": "IPv4", 00:11:13.514 "traddr": "10.0.0.1", 00:11:13.514 "trsvcid": "54260" 00:11:13.514 }, 00:11:13.514 "auth": { 00:11:13.514 "state": "completed", 00:11:13.514 "digest": "sha384", 00:11:13.514 "dhgroup": "ffdhe2048" 00:11:13.514 } 00:11:13.514 } 00:11:13.514 ]' 00:11:13.514 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.514 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.514 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.772 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:13.772 09:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.772 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.772 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.772 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.030 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid 
d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:11:14.595 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.596 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:14.596 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.596 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.596 09:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.596 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:14.596 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.596 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:14.596 09:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:14.854 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:14.854 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.854 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:14.854 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:14.854 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:14.854 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.854 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.854 09:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.854 09:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.854 09:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.854 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.854 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.439 00:11:15.440 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.440 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.440 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.440 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.440 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.440 09:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.440 09:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.440 09:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.440 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.440 { 00:11:15.440 "cntlid": 65, 00:11:15.440 "qid": 0, 00:11:15.440 "state": "enabled", 00:11:15.440 "thread": "nvmf_tgt_poll_group_000", 00:11:15.440 "listen_address": { 00:11:15.440 "trtype": "TCP", 00:11:15.440 "adrfam": "IPv4", 00:11:15.440 "traddr": "10.0.0.2", 00:11:15.440 "trsvcid": "4420" 00:11:15.440 }, 00:11:15.440 "peer_address": { 00:11:15.440 "trtype": "TCP", 00:11:15.440 "adrfam": "IPv4", 00:11:15.440 "traddr": "10.0.0.1", 00:11:15.440 "trsvcid": "38886" 00:11:15.440 }, 00:11:15.440 "auth": { 00:11:15.440 "state": "completed", 00:11:15.440 "digest": "sha384", 00:11:15.440 "dhgroup": "ffdhe3072" 00:11:15.440 } 00:11:15.440 } 00:11:15.440 ]' 00:11:15.440 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.698 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.698 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.698 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:15.698 09:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.698 09:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.698 09:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.698 09:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.955 09:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:11:16.521 09:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.521 09:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:16.521 09:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.521 09:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.521 09:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.521 09:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:11:16.521 09:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:16.521 09:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:16.778 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:16.778 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:16.778 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:16.778 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:16.778 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:16.778 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.778 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.778 09:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.778 09:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.778 09:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.778 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.779 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.388 00:11:17.388 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.388 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.388 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.645 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.645 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.645 09:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.645 09:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.645 09:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.645 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:17.645 { 00:11:17.645 "cntlid": 67, 00:11:17.645 "qid": 0, 00:11:17.645 "state": "enabled", 00:11:17.645 "thread": "nvmf_tgt_poll_group_000", 00:11:17.645 "listen_address": { 00:11:17.645 "trtype": "TCP", 00:11:17.645 "adrfam": "IPv4", 00:11:17.645 "traddr": "10.0.0.2", 00:11:17.645 "trsvcid": "4420" 00:11:17.645 }, 00:11:17.645 "peer_address": { 00:11:17.645 "trtype": 
"TCP", 00:11:17.645 "adrfam": "IPv4", 00:11:17.645 "traddr": "10.0.0.1", 00:11:17.645 "trsvcid": "38896" 00:11:17.645 }, 00:11:17.645 "auth": { 00:11:17.645 "state": "completed", 00:11:17.645 "digest": "sha384", 00:11:17.645 "dhgroup": "ffdhe3072" 00:11:17.645 } 00:11:17.645 } 00:11:17.645 ]' 00:11:17.645 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:17.645 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.645 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:17.645 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:17.645 09:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:17.645 09:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.645 09:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.645 09:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.902 09:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:11:18.466 09:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.466 09:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:18.466 09:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.466 09:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.723 09:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.723 09:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:18.723 09:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:18.723 09:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:18.980 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:18.980 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:18.980 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:18.980 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:18.980 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:18.980 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.980 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.980 09:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.980 09:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.980 09:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.980 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.980 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.237 00:11:19.237 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:19.237 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.237 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.495 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.495 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.495 09:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.495 09:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.495 09:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.495 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.495 { 00:11:19.495 "cntlid": 69, 00:11:19.495 "qid": 0, 00:11:19.495 "state": "enabled", 00:11:19.495 "thread": "nvmf_tgt_poll_group_000", 00:11:19.495 "listen_address": { 00:11:19.495 "trtype": "TCP", 00:11:19.495 "adrfam": "IPv4", 00:11:19.495 "traddr": "10.0.0.2", 00:11:19.495 "trsvcid": "4420" 00:11:19.495 }, 00:11:19.495 "peer_address": { 00:11:19.495 "trtype": "TCP", 00:11:19.495 "adrfam": "IPv4", 00:11:19.495 "traddr": "10.0.0.1", 00:11:19.495 "trsvcid": "38926" 00:11:19.495 }, 00:11:19.495 "auth": { 00:11:19.495 "state": "completed", 00:11:19.495 "digest": "sha384", 00:11:19.495 "dhgroup": "ffdhe3072" 00:11:19.495 } 00:11:19.495 } 00:11:19.495 ]' 00:11:19.495 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.495 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.495 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.752 09:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:19.752 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.752 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.752 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.752 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.009 09:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:11:20.610 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.610 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:20.611 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.611 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.611 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.611 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:20.611 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.611 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.871 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:20.871 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.871 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:20.871 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:20.871 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:20.871 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.871 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:11:20.871 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.871 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.871 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.871 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:20.871 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:21.436 00:11:21.436 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:11:21.436 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.436 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.436 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.436 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.436 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.436 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.436 09:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.436 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.436 { 00:11:21.436 "cntlid": 71, 00:11:21.436 "qid": 0, 00:11:21.436 "state": "enabled", 00:11:21.436 "thread": "nvmf_tgt_poll_group_000", 00:11:21.436 "listen_address": { 00:11:21.436 "trtype": "TCP", 00:11:21.436 "adrfam": "IPv4", 00:11:21.436 "traddr": "10.0.0.2", 00:11:21.436 "trsvcid": "4420" 00:11:21.436 }, 00:11:21.436 "peer_address": { 00:11:21.436 "trtype": "TCP", 00:11:21.436 "adrfam": "IPv4", 00:11:21.436 "traddr": "10.0.0.1", 00:11:21.436 "trsvcid": "38942" 00:11:21.436 }, 00:11:21.436 "auth": { 00:11:21.436 "state": "completed", 00:11:21.436 "digest": "sha384", 00:11:21.436 "dhgroup": "ffdhe3072" 00:11:21.436 } 00:11:21.436 } 00:11:21.436 ]' 00:11:21.436 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.698 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.698 09:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.698 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:21.698 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.698 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.698 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.698 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.263 09:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:11:22.828 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.828 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:22.828 09:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.828 09:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.828 09:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.828 09:37:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:22.828 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.828 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:22.828 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:23.085 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:23.085 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.085 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:23.085 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:23.085 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:23.085 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.085 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.085 09:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.085 09:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.085 09:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.085 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.085 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.342 00:11:23.342 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.342 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.342 09:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.599 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.856 { 00:11:23.856 "cntlid": 73, 00:11:23.856 "qid": 0, 00:11:23.856 "state": "enabled", 00:11:23.856 "thread": "nvmf_tgt_poll_group_000", 00:11:23.856 "listen_address": { 00:11:23.856 "trtype": 
"TCP", 00:11:23.856 "adrfam": "IPv4", 00:11:23.856 "traddr": "10.0.0.2", 00:11:23.856 "trsvcid": "4420" 00:11:23.856 }, 00:11:23.856 "peer_address": { 00:11:23.856 "trtype": "TCP", 00:11:23.856 "adrfam": "IPv4", 00:11:23.856 "traddr": "10.0.0.1", 00:11:23.856 "trsvcid": "38974" 00:11:23.856 }, 00:11:23.856 "auth": { 00:11:23.856 "state": "completed", 00:11:23.856 "digest": "sha384", 00:11:23.856 "dhgroup": "ffdhe4096" 00:11:23.856 } 00:11:23.856 } 00:11:23.856 ]' 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.856 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.114 09:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:11:24.678 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.935 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:24.936 09:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.936 09:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.936 09:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.936 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.936 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:24.936 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:25.194 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:25.194 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.194 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:25.194 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:25.194 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:25.194 09:37:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.194 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.194 09:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.194 09:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.194 09:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.194 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.194 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.453 00:11:25.453 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.453 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.453 09:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.711 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.711 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.711 09:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.711 09:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.711 09:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.711 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.711 { 00:11:25.711 "cntlid": 75, 00:11:25.711 "qid": 0, 00:11:25.711 "state": "enabled", 00:11:25.711 "thread": "nvmf_tgt_poll_group_000", 00:11:25.711 "listen_address": { 00:11:25.711 "trtype": "TCP", 00:11:25.711 "adrfam": "IPv4", 00:11:25.711 "traddr": "10.0.0.2", 00:11:25.711 "trsvcid": "4420" 00:11:25.711 }, 00:11:25.711 "peer_address": { 00:11:25.711 "trtype": "TCP", 00:11:25.711 "adrfam": "IPv4", 00:11:25.711 "traddr": "10.0.0.1", 00:11:25.711 "trsvcid": "57532" 00:11:25.711 }, 00:11:25.711 "auth": { 00:11:25.711 "state": "completed", 00:11:25.711 "digest": "sha384", 00:11:25.711 "dhgroup": "ffdhe4096" 00:11:25.711 } 00:11:25.711 } 00:11:25.711 ]' 00:11:25.711 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.969 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.969 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:25.969 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:25.969 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:25.969 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:25.969 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.969 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.227 09:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:11:27.161 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.161 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:27.161 09:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.161 09:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.161 09:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.161 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.161 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:27.161 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:27.420 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:11:27.420 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.420 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:27.420 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:27.420 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:27.420 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.420 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.420 09:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.420 09:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.420 09:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.420 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.420 09:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.679 00:11:27.679 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.679 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.679 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.937 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.937 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.937 09:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.937 09:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.937 09:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.937 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.937 { 00:11:27.937 "cntlid": 77, 00:11:27.937 "qid": 0, 00:11:27.937 "state": "enabled", 00:11:27.937 "thread": "nvmf_tgt_poll_group_000", 00:11:27.937 "listen_address": { 00:11:27.937 "trtype": "TCP", 00:11:27.937 "adrfam": "IPv4", 00:11:27.937 "traddr": "10.0.0.2", 00:11:27.937 "trsvcid": "4420" 00:11:27.937 }, 00:11:27.937 "peer_address": { 00:11:27.937 "trtype": "TCP", 00:11:27.937 "adrfam": "IPv4", 00:11:27.937 "traddr": "10.0.0.1", 00:11:27.937 "trsvcid": "57562" 00:11:27.937 }, 00:11:27.937 "auth": { 00:11:27.937 "state": "completed", 00:11:27.937 "digest": "sha384", 00:11:27.937 "dhgroup": "ffdhe4096" 00:11:27.937 } 00:11:27.937 } 00:11:27.937 ]' 00:11:27.937 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.195 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.195 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.195 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:28.195 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.195 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.195 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.195 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.453 09:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:29.505 09:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:30.115 00:11:30.115 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.115 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:30.115 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.115 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:30.373 { 00:11:30.373 "cntlid": 79, 00:11:30.373 "qid": 0, 00:11:30.373 "state": "enabled", 00:11:30.373 "thread": "nvmf_tgt_poll_group_000", 00:11:30.373 "listen_address": { 00:11:30.373 "trtype": "TCP", 00:11:30.373 "adrfam": "IPv4", 00:11:30.373 "traddr": "10.0.0.2", 00:11:30.373 "trsvcid": "4420" 00:11:30.373 }, 00:11:30.373 "peer_address": { 00:11:30.373 "trtype": "TCP", 00:11:30.373 "adrfam": "IPv4", 00:11:30.373 "traddr": "10.0.0.1", 00:11:30.373 "trsvcid": "57602" 00:11:30.373 }, 00:11:30.373 "auth": { 00:11:30.373 "state": "completed", 00:11:30.373 "digest": "sha384", 00:11:30.373 "dhgroup": "ffdhe4096" 00:11:30.373 } 00:11:30.373 } 00:11:30.373 ]' 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.373 09:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.631 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:11:31.566 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.566 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:31.566 09:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.566 09:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.566 09:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.566 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:31.566 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.566 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:31.566 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:31.567 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:31.567 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.567 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
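After each attach, the helper double-checks on the target that the new admin qpair really negotiated the digest and DH group selected for this iteration and that authentication reached the completed state. A minimal sketch of that verification, reusing the jq filters visible in the log (the subsystem NQN is copied from the log; the expected sha384/ffdhe6144 values are just this iteration's settings):

    qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The test compares these three fields against the loop's current settings.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]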
00:11:31.567 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:31.567 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:31.567 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.567 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.567 09:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.567 09:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.567 09:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.567 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.567 09:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.136 00:11:32.136 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.136 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.136 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.394 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.394 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.394 09:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.394 09:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.394 09:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.394 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.394 { 00:11:32.394 "cntlid": 81, 00:11:32.394 "qid": 0, 00:11:32.394 "state": "enabled", 00:11:32.394 "thread": "nvmf_tgt_poll_group_000", 00:11:32.394 "listen_address": { 00:11:32.394 "trtype": "TCP", 00:11:32.394 "adrfam": "IPv4", 00:11:32.394 "traddr": "10.0.0.2", 00:11:32.394 "trsvcid": "4420" 00:11:32.394 }, 00:11:32.394 "peer_address": { 00:11:32.394 "trtype": "TCP", 00:11:32.394 "adrfam": "IPv4", 00:11:32.394 "traddr": "10.0.0.1", 00:11:32.394 "trsvcid": "57632" 00:11:32.394 }, 00:11:32.394 "auth": { 00:11:32.394 "state": "completed", 00:11:32.394 "digest": "sha384", 00:11:32.394 "dhgroup": "ffdhe6144" 00:11:32.394 } 00:11:32.394 } 00:11:32.394 ]' 00:11:32.394 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.394 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.394 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.394 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:11:32.394 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.653 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.653 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.653 09:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.912 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:11:33.478 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.478 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:33.478 09:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.478 09:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.478 09:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.478 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.478 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:33.478 09:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:33.735 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:33.735 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.735 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:33.735 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:33.735 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:33.735 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.735 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.735 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.735 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.735 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.735 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.735 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.336 00:11:34.336 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.336 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.336 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.594 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.594 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.594 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.594 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.594 09:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.594 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.594 { 00:11:34.594 "cntlid": 83, 00:11:34.594 "qid": 0, 00:11:34.594 "state": "enabled", 00:11:34.594 "thread": "nvmf_tgt_poll_group_000", 00:11:34.594 "listen_address": { 00:11:34.594 "trtype": "TCP", 00:11:34.594 "adrfam": "IPv4", 00:11:34.594 "traddr": "10.0.0.2", 00:11:34.594 "trsvcid": "4420" 00:11:34.594 }, 00:11:34.594 "peer_address": { 00:11:34.594 "trtype": "TCP", 00:11:34.594 "adrfam": "IPv4", 00:11:34.594 "traddr": "10.0.0.1", 00:11:34.594 "trsvcid": "57664" 00:11:34.594 }, 00:11:34.594 "auth": { 00:11:34.594 "state": "completed", 00:11:34.594 "digest": "sha384", 00:11:34.594 "dhgroup": "ffdhe6144" 00:11:34.594 } 00:11:34.594 } 00:11:34.594 ]' 00:11:34.594 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.594 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:34.594 09:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.594 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:34.594 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.850 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.850 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.850 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.107 09:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:11:35.671 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:11:35.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.671 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:35.671 09:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.671 09:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.671 09:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.671 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.671 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:35.671 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:35.926 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:35.926 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.926 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:35.926 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:35.926 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:35.926 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.926 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.926 09:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.926 09:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.926 09:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.926 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.926 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.489 00:11:36.489 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.489 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.489 09:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.746 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.746 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.746 09:37:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.746 09:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.746 09:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.746 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.746 { 00:11:36.746 "cntlid": 85, 00:11:36.746 "qid": 0, 00:11:36.746 "state": "enabled", 00:11:36.746 "thread": "nvmf_tgt_poll_group_000", 00:11:36.746 "listen_address": { 00:11:36.746 "trtype": "TCP", 00:11:36.746 "adrfam": "IPv4", 00:11:36.746 "traddr": "10.0.0.2", 00:11:36.746 "trsvcid": "4420" 00:11:36.746 }, 00:11:36.746 "peer_address": { 00:11:36.746 "trtype": "TCP", 00:11:36.746 "adrfam": "IPv4", 00:11:36.746 "traddr": "10.0.0.1", 00:11:36.746 "trsvcid": "58822" 00:11:36.746 }, 00:11:36.746 "auth": { 00:11:36.746 "state": "completed", 00:11:36.746 "digest": "sha384", 00:11:36.746 "dhgroup": "ffdhe6144" 00:11:36.746 } 00:11:36.746 } 00:11:36.746 ]' 00:11:36.746 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:36.746 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.746 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:36.746 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:36.746 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.003 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.003 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.003 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.260 09:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:11:37.825 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.825 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:37.825 09:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.825 09:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.825 09:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.825 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.825 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:37.825 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:38.082 09:37:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:38.082 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.082 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:38.082 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:38.082 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:38.082 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.082 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:11:38.082 09:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.082 09:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.082 09:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.082 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:38.082 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:38.648 00:11:38.648 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.648 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.648 09:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.906 { 00:11:38.906 "cntlid": 87, 00:11:38.906 "qid": 0, 00:11:38.906 "state": "enabled", 00:11:38.906 "thread": "nvmf_tgt_poll_group_000", 00:11:38.906 "listen_address": { 00:11:38.906 "trtype": "TCP", 00:11:38.906 "adrfam": "IPv4", 00:11:38.906 "traddr": "10.0.0.2", 00:11:38.906 "trsvcid": "4420" 00:11:38.906 }, 00:11:38.906 "peer_address": { 00:11:38.906 "trtype": "TCP", 00:11:38.906 "adrfam": "IPv4", 00:11:38.906 "traddr": "10.0.0.1", 00:11:38.906 "trsvcid": "58846" 00:11:38.906 }, 00:11:38.906 "auth": { 00:11:38.906 "state": "completed", 00:11:38.906 "digest": "sha384", 00:11:38.906 "dhgroup": "ffdhe6144" 00:11:38.906 } 00:11:38.906 } 00:11:38.906 ]' 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.906 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.164 09:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.098 09:37:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.098 09:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.031 00:11:41.031 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.031 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.031 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.031 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.031 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.031 09:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.031 09:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.031 09:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.031 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.031 { 00:11:41.031 "cntlid": 89, 00:11:41.031 "qid": 0, 00:11:41.031 "state": "enabled", 00:11:41.031 "thread": "nvmf_tgt_poll_group_000", 00:11:41.031 "listen_address": { 00:11:41.031 "trtype": "TCP", 00:11:41.031 "adrfam": "IPv4", 00:11:41.031 "traddr": "10.0.0.2", 00:11:41.031 "trsvcid": "4420" 00:11:41.031 }, 00:11:41.031 "peer_address": { 00:11:41.031 "trtype": "TCP", 00:11:41.031 "adrfam": "IPv4", 00:11:41.031 "traddr": "10.0.0.1", 00:11:41.031 "trsvcid": "58862" 00:11:41.031 }, 00:11:41.031 "auth": { 00:11:41.031 "state": "completed", 00:11:41.031 "digest": "sha384", 00:11:41.031 "dhgroup": "ffdhe8192" 00:11:41.031 } 00:11:41.031 } 00:11:41.031 ]' 00:11:41.031 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.031 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.031 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.287 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:41.287 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.287 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.287 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.287 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.544 09:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret 
DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:11:42.108 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.108 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:42.108 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.108 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.108 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.108 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.108 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:42.108 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:42.365 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:42.365 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.365 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.365 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:42.365 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:42.365 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.365 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.365 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.365 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.365 09:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.365 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.365 09:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.296 00:11:43.296 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.296 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.297 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
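Each round also exercises the kernel initiator: nvme-cli connects in-band using the plaintext DHHC-1 secrets that correspond to the same key pair, then the controller is torn down and the host entry removed before the next key is tried. A trimmed sketch of that leg with the secrets elided (every flag shown is taken from the log):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da \
        --hostid d2f81337-7559-423d-93ce-5836d202b6da \
        --dhchap-secret "DHHC-1:00:..." \
        --dhchap-ctrl-secret "DHHC-1:03:..."

    # Tear the kernel controller back down before deregistering the host.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da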
00:11:43.297 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.297 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.297 09:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.297 09:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.297 09:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.297 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.297 { 00:11:43.297 "cntlid": 91, 00:11:43.297 "qid": 0, 00:11:43.297 "state": "enabled", 00:11:43.297 "thread": "nvmf_tgt_poll_group_000", 00:11:43.297 "listen_address": { 00:11:43.297 "trtype": "TCP", 00:11:43.297 "adrfam": "IPv4", 00:11:43.297 "traddr": "10.0.0.2", 00:11:43.297 "trsvcid": "4420" 00:11:43.297 }, 00:11:43.297 "peer_address": { 00:11:43.297 "trtype": "TCP", 00:11:43.297 "adrfam": "IPv4", 00:11:43.297 "traddr": "10.0.0.1", 00:11:43.297 "trsvcid": "58896" 00:11:43.297 }, 00:11:43.297 "auth": { 00:11:43.297 "state": "completed", 00:11:43.297 "digest": "sha384", 00:11:43.297 "dhgroup": "ffdhe8192" 00:11:43.297 } 00:11:43.297 } 00:11:43.297 ]' 00:11:43.297 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.554 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.554 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.554 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:43.554 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.554 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.554 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.554 09:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.810 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:11:44.742 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.742 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:44.742 09:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.742 09:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.742 09:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.742 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.742 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:11:44.742 09:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:44.742 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:44.742 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.742 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:44.742 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:44.742 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:44.742 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.742 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.742 09:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.742 09:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.742 09:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.742 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.742 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.306 00:11:45.306 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.306 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.306 09:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.565 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.565 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.565 09:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.565 09:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.823 09:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.823 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.823 { 00:11:45.823 "cntlid": 93, 00:11:45.823 "qid": 0, 00:11:45.823 "state": "enabled", 00:11:45.823 "thread": "nvmf_tgt_poll_group_000", 00:11:45.823 "listen_address": { 00:11:45.823 "trtype": "TCP", 00:11:45.823 "adrfam": "IPv4", 00:11:45.824 "traddr": "10.0.0.2", 00:11:45.824 "trsvcid": "4420" 00:11:45.824 }, 00:11:45.824 "peer_address": { 00:11:45.824 "trtype": "TCP", 00:11:45.824 "adrfam": "IPv4", 00:11:45.824 "traddr": "10.0.0.1", 00:11:45.824 "trsvcid": "43728" 00:11:45.824 }, 00:11:45.824 
"auth": { 00:11:45.824 "state": "completed", 00:11:45.824 "digest": "sha384", 00:11:45.824 "dhgroup": "ffdhe8192" 00:11:45.824 } 00:11:45.824 } 00:11:45.824 ]' 00:11:45.824 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.824 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.824 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.824 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:45.824 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.824 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.824 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.824 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.081 09:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:11:46.645 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.645 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:46.645 09:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.645 09:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:46.903 09:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:47.834 00:11:47.834 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.834 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.834 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.834 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.834 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.834 09:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.834 09:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.092 09:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.092 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:48.092 { 00:11:48.092 "cntlid": 95, 00:11:48.092 "qid": 0, 00:11:48.092 "state": "enabled", 00:11:48.092 "thread": "nvmf_tgt_poll_group_000", 00:11:48.092 "listen_address": { 00:11:48.092 "trtype": "TCP", 00:11:48.092 "adrfam": "IPv4", 00:11:48.092 "traddr": "10.0.0.2", 00:11:48.092 "trsvcid": "4420" 00:11:48.092 }, 00:11:48.092 "peer_address": { 00:11:48.092 "trtype": "TCP", 00:11:48.092 "adrfam": "IPv4", 00:11:48.092 "traddr": "10.0.0.1", 00:11:48.092 "trsvcid": "43754" 00:11:48.092 }, 00:11:48.092 "auth": { 00:11:48.092 "state": "completed", 00:11:48.092 "digest": "sha384", 00:11:48.092 "dhgroup": "ffdhe8192" 00:11:48.092 } 00:11:48.092 } 00:11:48.092 ]' 00:11:48.092 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:48.092 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:48.092 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:48.092 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:48.093 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:48.093 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.093 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.093 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.350 09:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.313 09:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.905 00:11:49.905 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:11:49.905 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.905 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.193 { 00:11:50.193 "cntlid": 97, 00:11:50.193 "qid": 0, 00:11:50.193 "state": "enabled", 00:11:50.193 "thread": "nvmf_tgt_poll_group_000", 00:11:50.193 "listen_address": { 00:11:50.193 "trtype": "TCP", 00:11:50.193 "adrfam": "IPv4", 00:11:50.193 "traddr": "10.0.0.2", 00:11:50.193 "trsvcid": "4420" 00:11:50.193 }, 00:11:50.193 "peer_address": { 00:11:50.193 "trtype": "TCP", 00:11:50.193 "adrfam": "IPv4", 00:11:50.193 "traddr": "10.0.0.1", 00:11:50.193 "trsvcid": "43776" 00:11:50.193 }, 00:11:50.193 "auth": { 00:11:50.193 "state": "completed", 00:11:50.193 "digest": "sha512", 00:11:50.193 "dhgroup": "null" 00:11:50.193 } 00:11:50.193 } 00:11:50.193 ]' 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.193 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.450 09:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:11:51.382 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.382 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:51.382 09:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.382 09:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.382 09:37:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.382 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.382 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:51.382 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:51.640 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:11:51.640 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.640 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:51.640 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:51.640 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:51.640 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.640 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.640 09:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.640 09:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.640 09:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.640 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.640 09:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.897 00:11:51.897 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.897 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.897 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.154 { 00:11:52.154 "cntlid": 99, 00:11:52.154 "qid": 0, 00:11:52.154 "state": "enabled", 00:11:52.154 "thread": "nvmf_tgt_poll_group_000", 00:11:52.154 "listen_address": { 00:11:52.154 "trtype": "TCP", 00:11:52.154 "adrfam": 
"IPv4", 00:11:52.154 "traddr": "10.0.0.2", 00:11:52.154 "trsvcid": "4420" 00:11:52.154 }, 00:11:52.154 "peer_address": { 00:11:52.154 "trtype": "TCP", 00:11:52.154 "adrfam": "IPv4", 00:11:52.154 "traddr": "10.0.0.1", 00:11:52.154 "trsvcid": "43812" 00:11:52.154 }, 00:11:52.154 "auth": { 00:11:52.154 "state": "completed", 00:11:52.154 "digest": "sha512", 00:11:52.154 "dhgroup": "null" 00:11:52.154 } 00:11:52.154 } 00:11:52.154 ]' 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.154 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.411 09:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.369 09:37:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.369 09:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.936 00:11:53.936 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.936 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.936 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.936 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.936 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.936 09:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.936 09:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.936 09:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.936 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.936 { 00:11:53.936 "cntlid": 101, 00:11:53.936 "qid": 0, 00:11:53.936 "state": "enabled", 00:11:53.936 "thread": "nvmf_tgt_poll_group_000", 00:11:53.936 "listen_address": { 00:11:53.936 "trtype": "TCP", 00:11:53.936 "adrfam": "IPv4", 00:11:53.936 "traddr": "10.0.0.2", 00:11:53.936 "trsvcid": "4420" 00:11:53.936 }, 00:11:53.936 "peer_address": { 00:11:53.936 "trtype": "TCP", 00:11:53.936 "adrfam": "IPv4", 00:11:53.936 "traddr": "10.0.0.1", 00:11:53.936 "trsvcid": "43832" 00:11:53.936 }, 00:11:53.936 "auth": { 00:11:53.936 "state": "completed", 00:11:53.936 "digest": "sha512", 00:11:53.936 "dhgroup": "null" 00:11:53.936 } 00:11:53.936 } 00:11:53.936 ]' 00:11:53.936 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:54.194 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:54.194 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:54.194 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:54.194 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:54.194 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.194 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:11:54.194 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.452 09:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:11:55.027 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.027 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:55.027 09:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.027 09:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.027 09:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.027 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:55.027 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:55.027 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:55.285 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:11:55.285 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:55.285 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:55.285 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:55.285 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:55.285 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.285 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:11:55.285 09:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.285 09:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.285 09:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.285 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:55.285 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:55.542 00:11:55.542 09:37:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.542 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.543 09:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:56.110 { 00:11:56.110 "cntlid": 103, 00:11:56.110 "qid": 0, 00:11:56.110 "state": "enabled", 00:11:56.110 "thread": "nvmf_tgt_poll_group_000", 00:11:56.110 "listen_address": { 00:11:56.110 "trtype": "TCP", 00:11:56.110 "adrfam": "IPv4", 00:11:56.110 "traddr": "10.0.0.2", 00:11:56.110 "trsvcid": "4420" 00:11:56.110 }, 00:11:56.110 "peer_address": { 00:11:56.110 "trtype": "TCP", 00:11:56.110 "adrfam": "IPv4", 00:11:56.110 "traddr": "10.0.0.1", 00:11:56.110 "trsvcid": "33568" 00:11:56.110 }, 00:11:56.110 "auth": { 00:11:56.110 "state": "completed", 00:11:56.110 "digest": "sha512", 00:11:56.110 "dhgroup": "null" 00:11:56.110 } 00:11:56.110 } 00:11:56.110 ]' 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.110 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.369 09:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.305 09:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.562 00:11:57.563 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.563 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.563 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:58.128 { 00:11:58.128 "cntlid": 105, 00:11:58.128 "qid": 0, 00:11:58.128 "state": "enabled", 00:11:58.128 "thread": "nvmf_tgt_poll_group_000", 00:11:58.128 
"listen_address": { 00:11:58.128 "trtype": "TCP", 00:11:58.128 "adrfam": "IPv4", 00:11:58.128 "traddr": "10.0.0.2", 00:11:58.128 "trsvcid": "4420" 00:11:58.128 }, 00:11:58.128 "peer_address": { 00:11:58.128 "trtype": "TCP", 00:11:58.128 "adrfam": "IPv4", 00:11:58.128 "traddr": "10.0.0.1", 00:11:58.128 "trsvcid": "33604" 00:11:58.128 }, 00:11:58.128 "auth": { 00:11:58.128 "state": "completed", 00:11:58.128 "digest": "sha512", 00:11:58.128 "dhgroup": "ffdhe2048" 00:11:58.128 } 00:11:58.128 } 00:11:58.128 ]' 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.128 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.387 09:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.319 09:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.602 00:11:59.862 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.862 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.862 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.862 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.862 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.862 09:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.862 09:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.119 09:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.119 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:00.119 { 00:12:00.119 "cntlid": 107, 00:12:00.119 "qid": 0, 00:12:00.119 "state": "enabled", 00:12:00.119 "thread": "nvmf_tgt_poll_group_000", 00:12:00.119 "listen_address": { 00:12:00.119 "trtype": "TCP", 00:12:00.119 "adrfam": "IPv4", 00:12:00.119 "traddr": "10.0.0.2", 00:12:00.119 "trsvcid": "4420" 00:12:00.119 }, 00:12:00.119 "peer_address": { 00:12:00.119 "trtype": "TCP", 00:12:00.119 "adrfam": "IPv4", 00:12:00.119 "traddr": "10.0.0.1", 00:12:00.119 "trsvcid": "33628" 00:12:00.119 }, 00:12:00.119 "auth": { 00:12:00.119 "state": "completed", 00:12:00.119 "digest": "sha512", 00:12:00.119 "dhgroup": "ffdhe2048" 00:12:00.119 } 00:12:00.119 } 00:12:00.119 ]' 00:12:00.119 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:00.119 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.119 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.119 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:00.119 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.119 09:37:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.119 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.120 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.378 09:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.310 09:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.568 00:12:01.825 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.825 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.825 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.825 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.825 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.825 09:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.825 09:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.083 09:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.083 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.083 { 00:12:02.083 "cntlid": 109, 00:12:02.083 "qid": 0, 00:12:02.083 "state": "enabled", 00:12:02.083 "thread": "nvmf_tgt_poll_group_000", 00:12:02.083 "listen_address": { 00:12:02.083 "trtype": "TCP", 00:12:02.083 "adrfam": "IPv4", 00:12:02.083 "traddr": "10.0.0.2", 00:12:02.083 "trsvcid": "4420" 00:12:02.083 }, 00:12:02.083 "peer_address": { 00:12:02.083 "trtype": "TCP", 00:12:02.083 "adrfam": "IPv4", 00:12:02.083 "traddr": "10.0.0.1", 00:12:02.083 "trsvcid": "33654" 00:12:02.083 }, 00:12:02.083 "auth": { 00:12:02.083 "state": "completed", 00:12:02.083 "digest": "sha512", 00:12:02.083 "dhgroup": "ffdhe2048" 00:12:02.083 } 00:12:02.083 } 00:12:02.083 ]' 00:12:02.083 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.083 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:02.083 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.083 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:02.083 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.083 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.083 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.083 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.341 09:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:03.274 09:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:03.857 00:12:03.857 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.857 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.857 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.119 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.119 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.119 09:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.119 09:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.119 09:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.119 09:37:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:12:04.119 { 00:12:04.119 "cntlid": 111, 00:12:04.119 "qid": 0, 00:12:04.119 "state": "enabled", 00:12:04.119 "thread": "nvmf_tgt_poll_group_000", 00:12:04.119 "listen_address": { 00:12:04.119 "trtype": "TCP", 00:12:04.119 "adrfam": "IPv4", 00:12:04.120 "traddr": "10.0.0.2", 00:12:04.120 "trsvcid": "4420" 00:12:04.120 }, 00:12:04.120 "peer_address": { 00:12:04.120 "trtype": "TCP", 00:12:04.120 "adrfam": "IPv4", 00:12:04.120 "traddr": "10.0.0.1", 00:12:04.120 "trsvcid": "33680" 00:12:04.120 }, 00:12:04.120 "auth": { 00:12:04.120 "state": "completed", 00:12:04.120 "digest": "sha512", 00:12:04.120 "dhgroup": "ffdhe2048" 00:12:04.120 } 00:12:04.120 } 00:12:04.120 ]' 00:12:04.120 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.120 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.120 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.120 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:04.120 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.120 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.120 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.120 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.377 09:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:12:05.311 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.311 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:05.311 09:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.311 09:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.311 09:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.311 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:05.311 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.311 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:05.311 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:05.570 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:05.570 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.570 09:37:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:12:05.570 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:05.570 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:05.570 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.570 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.570 09:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.570 09:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.570 09:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.570 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.570 09:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.829 00:12:05.829 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.829 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.829 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.087 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.087 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.087 09:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.087 09:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.087 09:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.087 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.087 { 00:12:06.087 "cntlid": 113, 00:12:06.087 "qid": 0, 00:12:06.087 "state": "enabled", 00:12:06.087 "thread": "nvmf_tgt_poll_group_000", 00:12:06.087 "listen_address": { 00:12:06.087 "trtype": "TCP", 00:12:06.087 "adrfam": "IPv4", 00:12:06.087 "traddr": "10.0.0.2", 00:12:06.087 "trsvcid": "4420" 00:12:06.087 }, 00:12:06.087 "peer_address": { 00:12:06.087 "trtype": "TCP", 00:12:06.087 "adrfam": "IPv4", 00:12:06.087 "traddr": "10.0.0.1", 00:12:06.087 "trsvcid": "56434" 00:12:06.088 }, 00:12:06.088 "auth": { 00:12:06.088 "state": "completed", 00:12:06.088 "digest": "sha512", 00:12:06.088 "dhgroup": "ffdhe3072" 00:12:06.088 } 00:12:06.088 } 00:12:06.088 ]' 00:12:06.088 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.088 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.088 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.346 09:38:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:06.346 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.346 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.346 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.346 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.605 09:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.538 09:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.102 00:12:08.102 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.102 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.102 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.360 { 00:12:08.360 "cntlid": 115, 00:12:08.360 "qid": 0, 00:12:08.360 "state": "enabled", 00:12:08.360 "thread": "nvmf_tgt_poll_group_000", 00:12:08.360 "listen_address": { 00:12:08.360 "trtype": "TCP", 00:12:08.360 "adrfam": "IPv4", 00:12:08.360 "traddr": "10.0.0.2", 00:12:08.360 "trsvcid": "4420" 00:12:08.360 }, 00:12:08.360 "peer_address": { 00:12:08.360 "trtype": "TCP", 00:12:08.360 "adrfam": "IPv4", 00:12:08.360 "traddr": "10.0.0.1", 00:12:08.360 "trsvcid": "56462" 00:12:08.360 }, 00:12:08.360 "auth": { 00:12:08.360 "state": "completed", 00:12:08.360 "digest": "sha512", 00:12:08.360 "dhgroup": "ffdhe3072" 00:12:08.360 } 00:12:08.360 } 00:12:08.360 ]' 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.360 09:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.930 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:12:09.500 09:38:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.500 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:09.500 09:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.500 09:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.500 09:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.500 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.500 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:09.500 09:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:09.759 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:09.759 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.759 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:09.759 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:09.759 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:09.759 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.759 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.759 09:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.759 09:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.759 09:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.759 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.759 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.017 00:12:10.017 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.017 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.017 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.274 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.274 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:10.274 09:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.274 09:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.532 09:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.532 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.532 { 00:12:10.532 "cntlid": 117, 00:12:10.532 "qid": 0, 00:12:10.532 "state": "enabled", 00:12:10.532 "thread": "nvmf_tgt_poll_group_000", 00:12:10.532 "listen_address": { 00:12:10.532 "trtype": "TCP", 00:12:10.532 "adrfam": "IPv4", 00:12:10.532 "traddr": "10.0.0.2", 00:12:10.532 "trsvcid": "4420" 00:12:10.532 }, 00:12:10.532 "peer_address": { 00:12:10.532 "trtype": "TCP", 00:12:10.532 "adrfam": "IPv4", 00:12:10.532 "traddr": "10.0.0.1", 00:12:10.532 "trsvcid": "56480" 00:12:10.532 }, 00:12:10.532 "auth": { 00:12:10.532 "state": "completed", 00:12:10.532 "digest": "sha512", 00:12:10.532 "dhgroup": "ffdhe3072" 00:12:10.532 } 00:12:10.532 } 00:12:10.532 ]' 00:12:10.532 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.532 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.532 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.532 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:10.532 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.532 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.532 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.532 09:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.790 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:12:11.723 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.723 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:11.723 09:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.723 09:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.723 09:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.723 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.723 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:11.723 09:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:11.723 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:11.723 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.723 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:11.723 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:11.723 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:11.723 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.723 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:12:11.723 09:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.723 09:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.723 09:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.723 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:11.723 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:12.289 00:12:12.289 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.289 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.289 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.547 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.547 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.547 09:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.547 09:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.547 09:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.547 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.547 { 00:12:12.547 "cntlid": 119, 00:12:12.548 "qid": 0, 00:12:12.548 "state": "enabled", 00:12:12.548 "thread": "nvmf_tgt_poll_group_000", 00:12:12.548 "listen_address": { 00:12:12.548 "trtype": "TCP", 00:12:12.548 "adrfam": "IPv4", 00:12:12.548 "traddr": "10.0.0.2", 00:12:12.548 "trsvcid": "4420" 00:12:12.548 }, 00:12:12.548 "peer_address": { 00:12:12.548 "trtype": "TCP", 00:12:12.548 "adrfam": "IPv4", 00:12:12.548 "traddr": "10.0.0.1", 00:12:12.548 "trsvcid": "56504" 00:12:12.548 }, 00:12:12.548 "auth": { 00:12:12.548 "state": "completed", 00:12:12.548 "digest": "sha512", 00:12:12.548 "dhgroup": "ffdhe3072" 00:12:12.548 } 00:12:12.548 } 00:12:12.548 ]' 00:12:12.548 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.548 
09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.548 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.548 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:12.548 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.548 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.548 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.548 09:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.806 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:12:13.745 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.745 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:13.745 09:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.745 09:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.745 09:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.745 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.745 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.745 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:13.745 09:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:13.745 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:13.745 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.745 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:13.745 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:13.745 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:13.745 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.745 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.745 09:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.745 09:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.745 09:38:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.745 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.745 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.411 00:12:14.411 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.411 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.411 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.669 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.669 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.669 09:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.669 09:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.669 09:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.669 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.669 { 00:12:14.669 "cntlid": 121, 00:12:14.669 "qid": 0, 00:12:14.669 "state": "enabled", 00:12:14.669 "thread": "nvmf_tgt_poll_group_000", 00:12:14.669 "listen_address": { 00:12:14.669 "trtype": "TCP", 00:12:14.669 "adrfam": "IPv4", 00:12:14.669 "traddr": "10.0.0.2", 00:12:14.669 "trsvcid": "4420" 00:12:14.669 }, 00:12:14.669 "peer_address": { 00:12:14.669 "trtype": "TCP", 00:12:14.669 "adrfam": "IPv4", 00:12:14.669 "traddr": "10.0.0.1", 00:12:14.669 "trsvcid": "56518" 00:12:14.669 }, 00:12:14.669 "auth": { 00:12:14.669 "state": "completed", 00:12:14.669 "digest": "sha512", 00:12:14.669 "dhgroup": "ffdhe4096" 00:12:14.669 } 00:12:14.669 } 00:12:14.669 ]' 00:12:14.669 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.669 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.669 09:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.669 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:14.669 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.669 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.669 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.669 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.927 09:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret 
DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.860 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.425 00:12:16.425 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.425 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.425 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:12:16.425 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.425 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.425 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.425 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.425 09:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.425 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.425 { 00:12:16.425 "cntlid": 123, 00:12:16.425 "qid": 0, 00:12:16.425 "state": "enabled", 00:12:16.425 "thread": "nvmf_tgt_poll_group_000", 00:12:16.425 "listen_address": { 00:12:16.425 "trtype": "TCP", 00:12:16.425 "adrfam": "IPv4", 00:12:16.425 "traddr": "10.0.0.2", 00:12:16.425 "trsvcid": "4420" 00:12:16.425 }, 00:12:16.425 "peer_address": { 00:12:16.425 "trtype": "TCP", 00:12:16.425 "adrfam": "IPv4", 00:12:16.425 "traddr": "10.0.0.1", 00:12:16.425 "trsvcid": "50118" 00:12:16.425 }, 00:12:16.425 "auth": { 00:12:16.425 "state": "completed", 00:12:16.425 "digest": "sha512", 00:12:16.425 "dhgroup": "ffdhe4096" 00:12:16.425 } 00:12:16.425 } 00:12:16.425 ]' 00:12:16.425 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.682 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.682 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.682 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.682 09:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.682 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.682 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.682 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.939 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:12:17.505 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.505 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:17.505 09:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.505 09:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.505 09:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.505 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.505 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:12:17.505 09:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:17.762 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:17.762 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.762 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:17.762 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:17.762 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:17.762 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.762 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.762 09:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.762 09:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.762 09:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.762 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.762 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.327 00:12:18.327 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.327 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.327 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.327 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.327 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.327 09:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.327 09:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.327 09:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.327 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.327 { 00:12:18.327 "cntlid": 125, 00:12:18.327 "qid": 0, 00:12:18.327 "state": "enabled", 00:12:18.327 "thread": "nvmf_tgt_poll_group_000", 00:12:18.327 "listen_address": { 00:12:18.327 "trtype": "TCP", 00:12:18.327 "adrfam": "IPv4", 00:12:18.327 "traddr": "10.0.0.2", 00:12:18.327 "trsvcid": "4420" 00:12:18.327 }, 00:12:18.327 "peer_address": { 00:12:18.327 "trtype": "TCP", 00:12:18.327 "adrfam": "IPv4", 00:12:18.327 "traddr": "10.0.0.1", 00:12:18.327 "trsvcid": "50128" 00:12:18.327 }, 00:12:18.327 
"auth": { 00:12:18.327 "state": "completed", 00:12:18.327 "digest": "sha512", 00:12:18.327 "dhgroup": "ffdhe4096" 00:12:18.327 } 00:12:18.327 } 00:12:18.327 ]' 00:12:18.327 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.584 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.584 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.584 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:18.584 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.584 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.584 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.584 09:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.865 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:12:19.431 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.431 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:19.431 09:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.431 09:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.431 09:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.431 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.431 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:19.431 09:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:19.690 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:19.690 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.690 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:19.690 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:19.690 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:19.690 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.690 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:12:19.690 09:38:14 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.690 09:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.690 09:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.690 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:19.690 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:20.261 00:12:20.261 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.261 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.261 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.261 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.261 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.261 09:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.261 09:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.261 09:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.261 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.261 { 00:12:20.261 "cntlid": 127, 00:12:20.261 "qid": 0, 00:12:20.261 "state": "enabled", 00:12:20.261 "thread": "nvmf_tgt_poll_group_000", 00:12:20.261 "listen_address": { 00:12:20.261 "trtype": "TCP", 00:12:20.261 "adrfam": "IPv4", 00:12:20.261 "traddr": "10.0.0.2", 00:12:20.261 "trsvcid": "4420" 00:12:20.261 }, 00:12:20.261 "peer_address": { 00:12:20.261 "trtype": "TCP", 00:12:20.261 "adrfam": "IPv4", 00:12:20.261 "traddr": "10.0.0.1", 00:12:20.261 "trsvcid": "50152" 00:12:20.261 }, 00:12:20.261 "auth": { 00:12:20.261 "state": "completed", 00:12:20.261 "digest": "sha512", 00:12:20.261 "dhgroup": "ffdhe4096" 00:12:20.261 } 00:12:20.261 } 00:12:20.262 ]' 00:12:20.262 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.520 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.520 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.520 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:20.520 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.520 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.520 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.520 09:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.778 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:12:21.343 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.343 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:21.343 09:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.343 09:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.343 09:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.343 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:21.343 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.343 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:21.343 09:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:21.909 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:21.909 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.909 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:21.909 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:21.909 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:21.909 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.909 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.909 09:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.909 09:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.909 09:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.909 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.909 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.166 00:12:22.166 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.166 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:12:22.166 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.423 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.423 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.423 09:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.423 09:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.423 09:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.423 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.423 { 00:12:22.423 "cntlid": 129, 00:12:22.423 "qid": 0, 00:12:22.423 "state": "enabled", 00:12:22.423 "thread": "nvmf_tgt_poll_group_000", 00:12:22.423 "listen_address": { 00:12:22.423 "trtype": "TCP", 00:12:22.423 "adrfam": "IPv4", 00:12:22.423 "traddr": "10.0.0.2", 00:12:22.423 "trsvcid": "4420" 00:12:22.423 }, 00:12:22.423 "peer_address": { 00:12:22.423 "trtype": "TCP", 00:12:22.423 "adrfam": "IPv4", 00:12:22.423 "traddr": "10.0.0.1", 00:12:22.423 "trsvcid": "50182" 00:12:22.423 }, 00:12:22.423 "auth": { 00:12:22.423 "state": "completed", 00:12:22.423 "digest": "sha512", 00:12:22.423 "dhgroup": "ffdhe6144" 00:12:22.423 } 00:12:22.423 } 00:12:22.423 ]' 00:12:22.423 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.681 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:22.681 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.681 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:22.681 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.681 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.681 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.681 09:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.938 09:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.937 
09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.937 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.502 00:12:24.502 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.502 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.502 09:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.759 { 00:12:24.759 "cntlid": 131, 00:12:24.759 "qid": 0, 00:12:24.759 "state": "enabled", 00:12:24.759 "thread": "nvmf_tgt_poll_group_000", 00:12:24.759 "listen_address": { 00:12:24.759 "trtype": "TCP", 00:12:24.759 "adrfam": "IPv4", 00:12:24.759 "traddr": "10.0.0.2", 00:12:24.759 "trsvcid": 
"4420" 00:12:24.759 }, 00:12:24.759 "peer_address": { 00:12:24.759 "trtype": "TCP", 00:12:24.759 "adrfam": "IPv4", 00:12:24.759 "traddr": "10.0.0.1", 00:12:24.759 "trsvcid": "50210" 00:12:24.759 }, 00:12:24.759 "auth": { 00:12:24.759 "state": "completed", 00:12:24.759 "digest": "sha512", 00:12:24.759 "dhgroup": "ffdhe6144" 00:12:24.759 } 00:12:24.759 } 00:12:24.759 ]' 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.759 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.017 09:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.951 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.516 00:12:26.516 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.516 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.516 09:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.773 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.774 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.774 09:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.774 09:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.774 09:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.774 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.774 { 00:12:26.774 "cntlid": 133, 00:12:26.774 "qid": 0, 00:12:26.774 "state": "enabled", 00:12:26.774 "thread": "nvmf_tgt_poll_group_000", 00:12:26.774 "listen_address": { 00:12:26.774 "trtype": "TCP", 00:12:26.774 "adrfam": "IPv4", 00:12:26.774 "traddr": "10.0.0.2", 00:12:26.774 "trsvcid": "4420" 00:12:26.774 }, 00:12:26.774 "peer_address": { 00:12:26.774 "trtype": "TCP", 00:12:26.774 "adrfam": "IPv4", 00:12:26.774 "traddr": "10.0.0.1", 00:12:26.774 "trsvcid": "50252" 00:12:26.774 }, 00:12:26.774 "auth": { 00:12:26.774 "state": "completed", 00:12:26.774 "digest": "sha512", 00:12:26.774 "dhgroup": "ffdhe6144" 00:12:26.774 } 00:12:26.774 } 00:12:26.774 ]' 00:12:26.774 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.774 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:26.774 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.774 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:26.774 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.031 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.031 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
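The trace above is one pass of the script's connect_authenticate helper for sha512/ffdhe6144: the host-side RPC server is told to negotiate only that digest and DH group, the target registers the host NQN on cnode0 with the numbered key (and, when a controller key exists for that index, with it as well for bidirectional authentication), the host attaches a controller so DH-HMAC-CHAP runs during the fabrics CONNECT, the resulting qpair is inspected, and the controller is detached again. A condensed sketch of the same sequence, built only from the RPCs and flags visible in the log (HOST_NQN stands in for the uuid-based host NQN used throughout; target-side calls go to the target's RPC socket, which the harness reaches through its rpc_cmd wrapper):

  # Host side: allow only sha512 + ffdhe6144 for DH-HMAC-CHAP negotiation
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # Target side: permit this host on the subsystem with key1 (ckey1 makes the auth bidirectional)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Host side: attach a controller; authentication happens as part of CONNECT
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOST_NQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Inspect the qpair, then tear the controller down before the next key/dhgroup combination
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0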
00:12:27.031 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.289 09:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:12:27.855 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.855 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:27.855 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.855 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.855 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.855 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.855 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:27.855 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:28.113 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:28.113 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.113 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:28.113 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:28.113 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:28.113 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.113 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:12:28.113 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.113 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.113 09:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.113 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:28.113 09:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:28.692 00:12:28.692 09:38:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.692 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.692 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.950 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.950 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.950 09:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.950 09:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.950 09:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.950 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.950 { 00:12:28.950 "cntlid": 135, 00:12:28.950 "qid": 0, 00:12:28.950 "state": "enabled", 00:12:28.950 "thread": "nvmf_tgt_poll_group_000", 00:12:28.950 "listen_address": { 00:12:28.950 "trtype": "TCP", 00:12:28.950 "adrfam": "IPv4", 00:12:28.950 "traddr": "10.0.0.2", 00:12:28.950 "trsvcid": "4420" 00:12:28.950 }, 00:12:28.950 "peer_address": { 00:12:28.950 "trtype": "TCP", 00:12:28.950 "adrfam": "IPv4", 00:12:28.950 "traddr": "10.0.0.1", 00:12:28.950 "trsvcid": "50290" 00:12:28.950 }, 00:12:28.950 "auth": { 00:12:28.950 "state": "completed", 00:12:28.950 "digest": "sha512", 00:12:28.950 "dhgroup": "ffdhe6144" 00:12:28.950 } 00:12:28.950 } 00:12:28.950 ]' 00:12:28.950 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.950 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:28.950 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.950 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:28.950 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.209 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.209 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.209 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.466 09:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:12:30.032 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.032 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:30.032 09:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.032 09:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.032 09:38:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.032 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:30.032 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.032 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:30.032 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:30.291 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:30.291 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.291 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:30.291 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:30.291 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:30.291 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.291 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.291 09:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.291 09:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.291 09:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.291 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.291 09:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.857 00:12:30.857 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.857 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.857 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.115 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.115 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.115 09:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.115 09:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.115 09:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.115 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.115 { 00:12:31.115 "cntlid": 137, 00:12:31.115 "qid": 0, 00:12:31.115 "state": "enabled", 
00:12:31.115 "thread": "nvmf_tgt_poll_group_000", 00:12:31.115 "listen_address": { 00:12:31.115 "trtype": "TCP", 00:12:31.115 "adrfam": "IPv4", 00:12:31.115 "traddr": "10.0.0.2", 00:12:31.115 "trsvcid": "4420" 00:12:31.115 }, 00:12:31.115 "peer_address": { 00:12:31.115 "trtype": "TCP", 00:12:31.115 "adrfam": "IPv4", 00:12:31.115 "traddr": "10.0.0.1", 00:12:31.115 "trsvcid": "50314" 00:12:31.115 }, 00:12:31.115 "auth": { 00:12:31.115 "state": "completed", 00:12:31.115 "digest": "sha512", 00:12:31.115 "dhgroup": "ffdhe8192" 00:12:31.115 } 00:12:31.115 } 00:12:31.115 ]' 00:12:31.115 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.115 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.115 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.115 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:31.115 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.373 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.373 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.373 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.631 09:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:12:32.197 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.197 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:32.197 09:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.197 09:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.197 09:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.197 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.197 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:32.197 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:32.455 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:32.455 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.455 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:32.455 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:32.455 
09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:32.455 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.455 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.455 09:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.455 09:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.455 09:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.455 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.455 09:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.020 00:12:33.020 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.020 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.020 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.276 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.276 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.276 09:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.276 09:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.276 09:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.276 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.276 { 00:12:33.276 "cntlid": 139, 00:12:33.276 "qid": 0, 00:12:33.276 "state": "enabled", 00:12:33.276 "thread": "nvmf_tgt_poll_group_000", 00:12:33.276 "listen_address": { 00:12:33.276 "trtype": "TCP", 00:12:33.276 "adrfam": "IPv4", 00:12:33.276 "traddr": "10.0.0.2", 00:12:33.276 "trsvcid": "4420" 00:12:33.276 }, 00:12:33.276 "peer_address": { 00:12:33.276 "trtype": "TCP", 00:12:33.276 "adrfam": "IPv4", 00:12:33.276 "traddr": "10.0.0.1", 00:12:33.276 "trsvcid": "50334" 00:12:33.276 }, 00:12:33.276 "auth": { 00:12:33.276 "state": "completed", 00:12:33.276 "digest": "sha512", 00:12:33.276 "dhgroup": "ffdhe8192" 00:12:33.276 } 00:12:33.276 } 00:12:33.276 ]' 00:12:33.276 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.276 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.276 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.543 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:33.543 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:12:33.543 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.543 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.543 09:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.807 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:01:YjhhOGFjNmUzOGE3Mjg4ODI5YWZkYjZjNmYyZjhlZWKJHRKa: --dhchap-ctrl-secret DHHC-1:02:YWU1MTlkOWQxZDU1YWJmZDEyNzdiMjBlZTE1Nzk1OGFmOTJhZTQ2YmZmOGY0ODVkP+p3kg==: 00:12:34.372 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.372 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:34.372 09:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.372 09:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.372 09:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.372 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.372 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:34.372 09:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:34.938 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:34.938 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.938 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:34.938 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:34.938 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:34.938 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.938 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.938 09:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.938 09:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.938 09:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.938 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.938 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.503 00:12:35.503 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.503 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.503 09:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.761 { 00:12:35.761 "cntlid": 141, 00:12:35.761 "qid": 0, 00:12:35.761 "state": "enabled", 00:12:35.761 "thread": "nvmf_tgt_poll_group_000", 00:12:35.761 "listen_address": { 00:12:35.761 "trtype": "TCP", 00:12:35.761 "adrfam": "IPv4", 00:12:35.761 "traddr": "10.0.0.2", 00:12:35.761 "trsvcid": "4420" 00:12:35.761 }, 00:12:35.761 "peer_address": { 00:12:35.761 "trtype": "TCP", 00:12:35.761 "adrfam": "IPv4", 00:12:35.761 "traddr": "10.0.0.1", 00:12:35.761 "trsvcid": "52152" 00:12:35.761 }, 00:12:35.761 "auth": { 00:12:35.761 "state": "completed", 00:12:35.761 "digest": "sha512", 00:12:35.761 "dhgroup": "ffdhe8192" 00:12:35.761 } 00:12:35.761 } 00:12:35.761 ]' 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.761 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.344 09:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:02:NTM0ZDJmNWUwODdkMWJiMTA0NTAxYWQ1YWZmMTQzYTRhOWY3ZjZjZjRiOTUxMzg1iPtSjA==: --dhchap-ctrl-secret DHHC-1:01:ODRjYzk2NWFiYzgyMThiNjI2MmY5YTdiYTM1ZTY2NGX0lMru: 00:12:36.909 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.909 09:38:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:36.909 09:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.909 09:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.909 09:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.909 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.909 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:36.909 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:37.166 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:37.166 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.166 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:37.166 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:37.166 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:37.166 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.166 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:12:37.166 09:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.166 09:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.166 09:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.166 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:37.166 09:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:37.790 00:12:37.790 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.790 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.790 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
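The ${ckeys[$3]:+...} expansion shown above is why this pass differs slightly from the earlier ones: the controller-key array has no entry for key3, so nvmf_subsystem_add_host is called with --dhchap-key key3 only and this iteration exercises unidirectional authentication. The qpair dump that follows is the actual verification step; the script reads the negotiated parameters back from the target and compares them with what it configured. A standalone sketch of that check, assuming the JSON shape shown in the log:

  # Ask the target how the live qpair authenticated and compare with the configured values
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]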
00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.049 { 00:12:38.049 "cntlid": 143, 00:12:38.049 "qid": 0, 00:12:38.049 "state": "enabled", 00:12:38.049 "thread": "nvmf_tgt_poll_group_000", 00:12:38.049 "listen_address": { 00:12:38.049 "trtype": "TCP", 00:12:38.049 "adrfam": "IPv4", 00:12:38.049 "traddr": "10.0.0.2", 00:12:38.049 "trsvcid": "4420" 00:12:38.049 }, 00:12:38.049 "peer_address": { 00:12:38.049 "trtype": "TCP", 00:12:38.049 "adrfam": "IPv4", 00:12:38.049 "traddr": "10.0.0.1", 00:12:38.049 "trsvcid": "52190" 00:12:38.049 }, 00:12:38.049 "auth": { 00:12:38.049 "state": "completed", 00:12:38.049 "digest": "sha512", 00:12:38.049 "dhgroup": "ffdhe8192" 00:12:38.049 } 00:12:38.049 } 00:12:38.049 ]' 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.049 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.614 09:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:12:39.182 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.182 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:39.182 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.182 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.182 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.182 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:39.182 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:39.182 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:39.182 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:39.182 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:39.182 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:39.440 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:39.440 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.440 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:39.440 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:39.440 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:39.440 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.440 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.440 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.440 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.440 09:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.440 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.440 09:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.007 00:12:40.007 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.007 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.007 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.265 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.265 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.265 09:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.265 09:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.265 09:38:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.265 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.265 { 00:12:40.265 "cntlid": 145, 00:12:40.265 "qid": 0, 00:12:40.265 "state": "enabled", 00:12:40.265 "thread": "nvmf_tgt_poll_group_000", 00:12:40.265 "listen_address": { 00:12:40.265 "trtype": "TCP", 00:12:40.265 "adrfam": "IPv4", 00:12:40.265 "traddr": "10.0.0.2", 00:12:40.265 "trsvcid": "4420" 00:12:40.265 }, 00:12:40.265 "peer_address": { 00:12:40.265 "trtype": "TCP", 00:12:40.265 "adrfam": "IPv4", 00:12:40.265 "traddr": "10.0.0.1", 00:12:40.265 "trsvcid": "52224" 00:12:40.265 }, 00:12:40.265 "auth": { 00:12:40.265 "state": "completed", 00:12:40.265 "digest": "sha512", 00:12:40.265 "dhgroup": "ffdhe8192" 00:12:40.265 } 00:12:40.265 } 
00:12:40.265 ]' 00:12:40.265 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.265 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.265 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.265 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:40.265 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.523 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.523 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.523 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.523 09:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:00:ODI2NzY1ZDUwMWUxZDlmMzMwMjMwOGRkOGU2MjkxZDYyMDQ5MzRlYzAxYTg1Nzg1eK0wRQ==: --dhchap-ctrl-secret DHHC-1:03:NjFmNDJkNGNhOWIwMDNiYjgzY2RhZmIyYjlhMjk0NDBjMzUzZTBjMTkyZjU5MTA1MDAwNDkzZmFiYzFmZDhlMIwlXRE=: 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.465 09:38:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:41.465 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.466 09:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:41.466 09:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:42.049 request: 00:12:42.049 { 00:12:42.049 "name": "nvme0", 00:12:42.049 "trtype": "tcp", 00:12:42.049 "traddr": "10.0.0.2", 00:12:42.049 "adrfam": "ipv4", 00:12:42.049 "trsvcid": "4420", 00:12:42.049 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:42.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da", 00:12:42.049 "prchk_reftag": false, 00:12:42.049 "prchk_guard": false, 00:12:42.049 "hdgst": false, 00:12:42.049 "ddgst": false, 00:12:42.049 "dhchap_key": "key2", 00:12:42.049 "method": "bdev_nvme_attach_controller", 00:12:42.049 "req_id": 1 00:12:42.049 } 00:12:42.049 Got JSON-RPC error response 00:12:42.049 response: 00:12:42.049 { 00:12:42.049 "code": -5, 00:12:42.049 "message": "Input/output error" 00:12:42.049 } 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:42.049 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:42.616 request: 00:12:42.616 { 00:12:42.616 "name": "nvme0", 00:12:42.616 "trtype": "tcp", 00:12:42.616 "traddr": "10.0.0.2", 00:12:42.616 "adrfam": "ipv4", 00:12:42.616 "trsvcid": "4420", 00:12:42.616 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:42.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da", 00:12:42.616 "prchk_reftag": false, 00:12:42.616 "prchk_guard": false, 00:12:42.616 "hdgst": false, 00:12:42.616 "ddgst": false, 00:12:42.616 "dhchap_key": "key1", 00:12:42.616 "dhchap_ctrlr_key": "ckey2", 00:12:42.616 "method": "bdev_nvme_attach_controller", 00:12:42.616 "req_id": 1 00:12:42.616 } 00:12:42.616 Got JSON-RPC error response 00:12:42.616 response: 00:12:42.616 { 00:12:42.616 "code": -5, 00:12:42.616 "message": "Input/output error" 00:12:42.616 } 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key1 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.616 09:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.182 request: 00:12:43.182 { 00:12:43.182 "name": "nvme0", 00:12:43.182 "trtype": "tcp", 00:12:43.182 "traddr": "10.0.0.2", 00:12:43.182 "adrfam": "ipv4", 00:12:43.182 "trsvcid": "4420", 00:12:43.182 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:43.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da", 00:12:43.182 "prchk_reftag": false, 00:12:43.182 "prchk_guard": false, 00:12:43.182 "hdgst": false, 00:12:43.182 "ddgst": false, 00:12:43.182 "dhchap_key": "key1", 00:12:43.182 "dhchap_ctrlr_key": "ckey1", 00:12:43.182 "method": "bdev_nvme_attach_controller", 00:12:43.182 "req_id": 1 00:12:43.182 } 00:12:43.182 Got JSON-RPC error response 00:12:43.182 response: 00:12:43.182 { 00:12:43.182 "code": -5, 00:12:43.182 "message": "Input/output error" 00:12:43.182 } 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69492 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69492 ']' 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69492 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69492 00:12:43.182 killing process with pid 69492 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69492' 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69492 00:12:43.182 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69492 00:12:43.440 09:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:43.440 09:38:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:43.440 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:43.440 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.440 09:38:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72558 00:12:43.440 09:38:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:43.440 09:38:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72558 00:12:43.440 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72558 ']' 00:12:43.440 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.440 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.441 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.441 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.441 09:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
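The three rejected attach attempts above are the negative half of the test: whenever the key the host presents does not match what the target has registered for this host NQN (a different key number, or a controller key the target was not given), DH-HMAC-CHAP fails during CONNECT and bdev_nvme_attach_controller comes back with JSON-RPC error -5, Input/output error, which the harness's NOT wrapper treats as the expected outcome. The first target (pid 69492) is then shut down and a new one (pid 72558) is started with --wait-for-rpc and -L nvmf_auth so the remaining cases run with authentication debug logging enabled. A plain-bash equivalent of one expected-failure check (a hand-rolled stand-in, not the harness's NOT helper):

  # The target only knows key1 for this host, so presenting key2 must fail to authenticate
  if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
         -a 10.0.0.2 -s 4420 -q "$HOST_NQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
      echo "unexpected: attach with a mismatched DH-HMAC-CHAP key succeeded" >&2
      exit 1
  fi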
00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72558 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72558 ']' 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.376 09:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.711 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.711 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:12:44.711 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:12:44.711 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.711 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.711 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.711 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:12:44.711 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.711 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:44.712 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:44.712 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:44.712 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.712 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:12:44.712 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.712 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.970 09:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.970 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.970 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:45.536 00:12:45.536 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.536 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.536 09:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.794 { 00:12:45.794 "cntlid": 1, 00:12:45.794 "qid": 0, 00:12:45.794 "state": "enabled", 00:12:45.794 "thread": "nvmf_tgt_poll_group_000", 00:12:45.794 "listen_address": { 00:12:45.794 "trtype": "TCP", 00:12:45.794 "adrfam": "IPv4", 00:12:45.794 "traddr": "10.0.0.2", 00:12:45.794 "trsvcid": "4420" 00:12:45.794 }, 00:12:45.794 "peer_address": { 00:12:45.794 "trtype": "TCP", 00:12:45.794 "adrfam": "IPv4", 00:12:45.794 "traddr": "10.0.0.1", 00:12:45.794 "trsvcid": "50500" 00:12:45.794 }, 00:12:45.794 "auth": { 00:12:45.794 "state": "completed", 00:12:45.794 "digest": "sha512", 00:12:45.794 "dhgroup": "ffdhe8192" 00:12:45.794 } 00:12:45.794 } 00:12:45.794 ]' 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.794 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.052 09:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid d2f81337-7559-423d-93ce-5836d202b6da --dhchap-secret DHHC-1:03:ZDdjM2Q3NDNlODE4OGQwOTQ2NzJlMDRiZDZmODNiYWU5NzAxYWYxYjJiZjNjYmQ1YWU1N2JjYmM0YjU2ZTJiZQnW+PQ=: 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --dhchap-key key3 00:12:46.987 09:38:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:46.987 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:47.244 request: 00:12:47.244 { 00:12:47.244 "name": "nvme0", 00:12:47.244 "trtype": "tcp", 00:12:47.244 "traddr": "10.0.0.2", 00:12:47.244 "adrfam": "ipv4", 00:12:47.244 "trsvcid": "4420", 00:12:47.244 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:47.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da", 00:12:47.244 "prchk_reftag": false, 00:12:47.244 "prchk_guard": false, 00:12:47.244 "hdgst": false, 00:12:47.244 "ddgst": false, 00:12:47.244 "dhchap_key": "key3", 00:12:47.244 "method": "bdev_nvme_attach_controller", 00:12:47.244 "req_id": 1 00:12:47.244 } 00:12:47.244 Got JSON-RPC error response 00:12:47.244 response: 00:12:47.244 { 00:12:47.244 "code": -5, 00:12:47.244 "message": "Input/output error" 00:12:47.244 } 00:12:47.244 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:47.244 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:47.244 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:47.244 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:47.244 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 
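The Input/output error above (JSON-RPC code -5) is the expected outcome: the host-side options were narrowed to sha256 just before the attach, so the NOT wrapper treats the failed bdev_nvme_attach_controller as a pass. A sketch of that negative-test pattern against the host RPC socket, reusing the address, NQNs and key name exactly as they appear in the trace, might look like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Narrow the host to a single DH-HMAC-CHAP digest, then expect the attach to fail.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
  if $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
      echo "attach unexpectedly succeeded" >&2
      exit 1
  fi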
00:12:47.244 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:12:47.244 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:47.244 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:47.502 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:47.502 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:47.502 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:47.502 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:47.502 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.502 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:47.502 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.502 09:38:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:47.502 09:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:47.760 request: 00:12:47.760 { 00:12:47.760 "name": "nvme0", 00:12:47.760 "trtype": "tcp", 00:12:47.760 "traddr": "10.0.0.2", 00:12:47.760 "adrfam": "ipv4", 00:12:47.760 "trsvcid": "4420", 00:12:47.760 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:47.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da", 00:12:47.760 "prchk_reftag": false, 00:12:47.760 "prchk_guard": false, 00:12:47.760 "hdgst": false, 00:12:47.760 "ddgst": false, 00:12:47.760 "dhchap_key": "key3", 00:12:47.760 "method": "bdev_nvme_attach_controller", 00:12:47.760 "req_id": 1 00:12:47.760 } 00:12:47.760 Got JSON-RPC error response 00:12:47.760 response: 00:12:47.761 { 00:12:47.761 "code": -5, 00:12:47.761 "message": "Input/output error" 00:12:47.761 } 00:12:47.761 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:47.761 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:47.761 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:47.761 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:47.761 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:47.761 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf 
%s sha256,sha384,sha512 00:12:47.761 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:12:47.761 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:47.761 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:47.761 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:48.327 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 
00:12:48.586 request: 00:12:48.586 { 00:12:48.586 "name": "nvme0", 00:12:48.586 "trtype": "tcp", 00:12:48.586 "traddr": "10.0.0.2", 00:12:48.586 "adrfam": "ipv4", 00:12:48.586 "trsvcid": "4420", 00:12:48.586 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:48.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da", 00:12:48.586 "prchk_reftag": false, 00:12:48.586 "prchk_guard": false, 00:12:48.586 "hdgst": false, 00:12:48.586 "ddgst": false, 00:12:48.586 "dhchap_key": "key0", 00:12:48.586 "dhchap_ctrlr_key": "key1", 00:12:48.586 "method": "bdev_nvme_attach_controller", 00:12:48.586 "req_id": 1 00:12:48.586 } 00:12:48.586 Got JSON-RPC error response 00:12:48.586 response: 00:12:48.586 { 00:12:48.586 "code": -5, 00:12:48.586 "message": "Input/output error" 00:12:48.586 } 00:12:48.586 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:48.586 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:48.586 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:48.586 09:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:48.586 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:48.586 09:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:12:48.845 00:12:48.845 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:12:48.845 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:12:48.845 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.104 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.104 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.104 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.362 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:12:49.362 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:12:49.362 09:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69524 00:12:49.362 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69524 ']' 00:12:49.362 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69524 00:12:49.362 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:49.362 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:49.362 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69524 00:12:49.362 killing process with pid 69524 00:12:49.362 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:49.362 09:38:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:49.362 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69524' 00:12:49.362 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69524 00:12:49.362 09:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69524 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:49.930 rmmod nvme_tcp 00:12:49.930 rmmod nvme_fabrics 00:12:49.930 rmmod nvme_keyring 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72558 ']' 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72558 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72558 ']' 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72558 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72558 00:12:49.930 killing process with pid 72558 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72558' 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72558 00:12:49.930 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72558 00:12:50.193 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:50.193 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:50.193 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:50.193 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.193 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.193 09:38:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.193 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.193 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.193 09:38:44 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:50.193 09:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.F3H /tmp/spdk.key-sha256.M3h /tmp/spdk.key-sha384.WoB /tmp/spdk.key-sha512.2HK /tmp/spdk.key-sha512.sCb /tmp/spdk.key-sha384.mcK /tmp/spdk.key-sha256.iqj '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:50.193 00:12:50.193 real 2m52.440s 00:12:50.193 user 6m53.283s 00:12:50.193 sys 0m26.662s 00:12:50.193 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:50.193 09:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.193 ************************************ 00:12:50.193 END TEST nvmf_auth_target 00:12:50.193 ************************************ 00:12:50.193 09:38:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:50.193 09:38:44 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:12:50.193 09:38:44 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:50.193 09:38:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:50.193 09:38:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.193 09:38:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:50.193 ************************************ 00:12:50.193 START TEST nvmf_bdevio_no_huge 00:12:50.193 ************************************ 00:12:50.193 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:50.452 * Looking for test storage... 00:12:50.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:50.452 
09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.452 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:50.453 09:38:44 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:50.453 Cannot find device "nvmf_tgt_br" 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:50.453 Cannot find device "nvmf_tgt_br2" 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:50.453 Cannot find device "nvmf_tgt_br" 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:50.453 Cannot find device "nvmf_tgt_br2" 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:50.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:50.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:50.453 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:50.712 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:50.712 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:50.713 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:50.713 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:50.713 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:50.713 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:50.713 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:50.713 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:50.713 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:50.713 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:50.713 09:38:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:50.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:12:50.713 00:12:50.713 --- 10.0.0.2 ping statistics --- 00:12:50.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.713 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:50.713 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:50.713 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:12:50.713 00:12:50.713 --- 10.0.0.3 ping statistics --- 00:12:50.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.713 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:50.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:12:50.713 00:12:50.713 --- 10.0.0.1 ping statistics --- 00:12:50.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.713 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72871 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72871 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72871 ']' 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:50.713 09:38:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:50.713 [2024-07-15 09:38:45.110018] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:12:50.713 [2024-07-15 09:38:45.110117] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:50.972 [2024-07-15 09:38:45.261742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.972 [2024-07-15 09:38:45.423687] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:50.972 [2024-07-15 09:38:45.423774] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.972 [2024-07-15 09:38:45.423789] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.972 [2024-07-15 09:38:45.423800] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.972 [2024-07-15 09:38:45.423809] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.972 [2024-07-15 09:38:45.423937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:50.972 [2024-07-15 09:38:45.424225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:50.972 [2024-07-15 09:38:45.424590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:50.972 [2024-07-15 09:38:45.424596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.972 [2024-07-15 09:38:45.430396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:51.906 [2024-07-15 09:38:46.168946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:51.906 Malloc0 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:51.906 [2024-07-15 09:38:46.213129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:51.906 { 00:12:51.906 "params": { 00:12:51.906 "name": "Nvme$subsystem", 00:12:51.906 "trtype": "$TEST_TRANSPORT", 00:12:51.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.906 "adrfam": "ipv4", 00:12:51.906 "trsvcid": "$NVMF_PORT", 00:12:51.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.906 "hdgst": ${hdgst:-false}, 00:12:51.906 "ddgst": ${ddgst:-false} 00:12:51.906 }, 00:12:51.906 "method": "bdev_nvme_attach_controller" 00:12:51.906 } 00:12:51.906 EOF 00:12:51.906 )") 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:12:51.906 09:38:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:51.906 "params": { 00:12:51.906 "name": "Nvme1", 00:12:51.906 "trtype": "tcp", 00:12:51.906 "traddr": "10.0.0.2", 00:12:51.906 "adrfam": "ipv4", 00:12:51.906 "trsvcid": "4420", 00:12:51.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.906 "hdgst": false, 00:12:51.906 "ddgst": false 00:12:51.906 }, 00:12:51.906 "method": "bdev_nvme_attach_controller" 00:12:51.906 }' 00:12:51.906 [2024-07-15 09:38:46.264946] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
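The gen_nvmf_target_json helper (sourced from test/nvmf/common.sh) expands the template above into the Nvme1 attach parameters shown, and bdevio reads the result through the /dev/fd/62 process substitution. A rough hand-written equivalent is sketched below; the outer subsystems/bdev wrapper is an assumption about the config layout bdevio expects, and only the params block is copied from the trace:

  # Hand bdevio a one-controller JSON config over a process substitution,
  # running without hugepages and with 1024 MB of memory, as in the trace.
  bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
  $bdevio --no-huge -s 1024 --json <(cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  )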
00:12:51.906 [2024-07-15 09:38:46.265044] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72914 ] 00:12:52.164 [2024-07-15 09:38:46.403132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:52.164 [2024-07-15 09:38:46.552311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.164 [2024-07-15 09:38:46.552429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.164 [2024-07-15 09:38:46.552437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.164 [2024-07-15 09:38:46.566612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:52.421 I/O targets: 00:12:52.421 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:52.421 00:12:52.421 00:12:52.421 CUnit - A unit testing framework for C - Version 2.1-3 00:12:52.421 http://cunit.sourceforge.net/ 00:12:52.421 00:12:52.421 00:12:52.421 Suite: bdevio tests on: Nvme1n1 00:12:52.421 Test: blockdev write read block ...passed 00:12:52.421 Test: blockdev write zeroes read block ...passed 00:12:52.421 Test: blockdev write zeroes read no split ...passed 00:12:52.421 Test: blockdev write zeroes read split ...passed 00:12:52.421 Test: blockdev write zeroes read split partial ...passed 00:12:52.421 Test: blockdev reset ...[2024-07-15 09:38:46.776968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:52.421 [2024-07-15 09:38:46.777144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a99870 (9): Bad file descriptor 00:12:52.421 passed 00:12:52.421 Test: blockdev write read 8 blocks ...[2024-07-15 09:38:46.793059] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:52.421 passed 00:12:52.421 Test: blockdev write read size > 128k ...passed 00:12:52.421 Test: blockdev write read invalid size ...passed 00:12:52.421 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:52.421 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:52.421 Test: blockdev write read max offset ...passed 00:12:52.421 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:52.421 Test: blockdev writev readv 8 blocks ...passed 00:12:52.421 Test: blockdev writev readv 30 x 1block ...passed 00:12:52.421 Test: blockdev writev readv block ...passed 00:12:52.421 Test: blockdev writev readv size > 128k ...passed 00:12:52.421 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:52.421 Test: blockdev comparev and writev ...[2024-07-15 09:38:46.801284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.421 [2024-07-15 09:38:46.801332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:52.421 [2024-07-15 09:38:46.801353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.421 [2024-07-15 09:38:46.801364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:52.421 [2024-07-15 09:38:46.801711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.421 [2024-07-15 09:38:46.801733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:52.421 [2024-07-15 09:38:46.801751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.421 [2024-07-15 09:38:46.801761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:52.421 [2024-07-15 09:38:46.802059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.421 [2024-07-15 09:38:46.802079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:52.421 [2024-07-15 09:38:46.802096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.421 [2024-07-15 09:38:46.802107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:52.421 [2024-07-15 09:38:46.802399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.421 [2024-07-15 09:38:46.802415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:52.421 [2024-07-15 09:38:46.802432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:52.421 [2024-07-15 09:38:46.802442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:12:52.421 passed 00:12:52.421 Test: blockdev nvme passthru rw ...passed 00:12:52.421 Test: blockdev nvme passthru vendor specific ...[2024-07-15 09:38:46.803272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:52.421 [2024-07-15 09:38:46.803298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:52.421 [2024-07-15 09:38:46.803402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:52.421 [2024-07-15 09:38:46.803419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:52.421 [2024-07-15 09:38:46.803527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:52.421 [2024-07-15 09:38:46.803542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:52.421 [2024-07-15 09:38:46.803648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:52.421 [2024-07-15 09:38:46.803664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:52.421 passed 00:12:52.421 Test: blockdev nvme admin passthru ...passed 00:12:52.421 Test: blockdev copy ...passed 00:12:52.421 00:12:52.421 Run Summary: Type Total Ran Passed Failed Inactive 00:12:52.421 suites 1 1 n/a 0 0 00:12:52.421 tests 23 23 23 0 0 00:12:52.421 asserts 152 152 152 0 n/a 00:12:52.421 00:12:52.421 Elapsed time = 0.170 seconds 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.986 rmmod nvme_tcp 00:12:52.986 rmmod nvme_fabrics 00:12:52.986 rmmod nvme_keyring 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72871 ']' 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 72871 00:12:52.986 
09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72871 ']' 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72871 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72871 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:12:52.986 killing process with pid 72871 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72871' 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72871 00:12:52.986 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72871 00:12:53.553 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.553 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.553 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.553 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.553 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.553 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.553 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.553 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.553 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:53.553 00:12:53.553 real 0m3.192s 00:12:53.553 user 0m10.445s 00:12:53.553 sys 0m1.276s 00:12:53.553 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:53.553 09:38:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:53.553 ************************************ 00:12:53.553 END TEST nvmf_bdevio_no_huge 00:12:53.553 ************************************ 00:12:53.553 09:38:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:53.553 09:38:47 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:53.553 09:38:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:53.553 09:38:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.553 09:38:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:53.553 ************************************ 00:12:53.553 START TEST nvmf_tls 00:12:53.553 ************************************ 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:53.553 * Looking for test storage... 
00:12:53.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:53.553 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:53.554 Cannot find device "nvmf_tgt_br" 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:53.554 Cannot find device "nvmf_tgt_br2" 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:53.554 09:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:53.554 Cannot find device "nvmf_tgt_br" 00:12:53.554 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:12:53.554 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:53.554 Cannot find device "nvmf_tgt_br2" 00:12:53.554 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:12:53.554 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:53.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:53.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:53.812 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:53.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:12:53.813 00:12:53.813 --- 10.0.0.2 ping statistics --- 00:12:53.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.813 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:53.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:53.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:12:53.813 00:12:53.813 --- 10.0.0.3 ping statistics --- 00:12:53.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.813 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:53.813 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:54.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:12:54.071 00:12:54.071 --- 10.0.0.1 ping statistics --- 00:12:54.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.071 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73097 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73097 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73097 ']' 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:54.071 09:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.072 09:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:54.072 09:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:54.072 [2024-07-15 09:38:48.363334] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:12:54.072 [2024-07-15 09:38:48.363442] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.072 [2024-07-15 09:38:48.506012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.342 [2024-07-15 09:38:48.629304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.342 [2024-07-15 09:38:48.629377] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:54.342 [2024-07-15 09:38:48.629403] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.342 [2024-07-15 09:38:48.629424] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.342 [2024-07-15 09:38:48.629440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.342 [2024-07-15 09:38:48.629497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.941 09:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.942 09:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:12:54.942 09:38:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:54.942 09:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:54.942 09:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:54.942 09:38:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.942 09:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:12:54.942 09:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:55.508 true 00:12:55.508 09:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:55.508 09:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:12:55.766 09:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:12:55.766 09:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:12:55.766 09:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:56.024 09:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:12:56.024 09:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:56.282 09:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:12:56.282 09:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:12:56.282 09:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:56.541 09:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:56.541 09:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:12:56.799 09:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:12:56.799 09:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:12:56.799 09:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:56.799 09:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:12:57.057 09:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:12:57.057 09:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:12:57.057 09:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:57.316 09:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:12:57.316 09:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 
00:12:57.574 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:12:57.574 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:12:57.574 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:12:58.213 09:38:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:12:58.471 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:58.471 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:12:58.471 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.7nYAX2CnCW 00:12:58.471 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:58.471 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.cZG6vP4sA3 00:12:58.471 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:58.471 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:58.471 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.7nYAX2CnCW 00:12:58.471 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.cZG6vP4sA3 00:12:58.471 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:58.729 09:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:58.988 [2024-07-15 09:38:53.216882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:12:58.988 09:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.7nYAX2CnCW 00:12:58.988 09:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.7nYAX2CnCW 00:12:58.988 09:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:59.245 [2024-07-15 09:38:53.520786] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.245 09:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:59.503 09:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:59.761 [2024-07-15 09:38:53.976873] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:59.761 [2024-07-15 09:38:53.977113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.761 09:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:59.761 malloc0 00:13:00.020 09:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:00.020 09:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7nYAX2CnCW 00:13:00.278 [2024-07-15 09:38:54.720383] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:00.278 09:38:54 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.7nYAX2CnCW 00:13:12.475 Initializing NVMe Controllers 00:13:12.475 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:12.475 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:12.475 Initialization complete. Launching workers. 
00:13:12.475 ======================================================== 00:13:12.475 Latency(us) 00:13:12.475 Device Information : IOPS MiB/s Average min max 00:13:12.475 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9624.26 37.59 6651.40 1789.49 10963.00 00:13:12.475 ======================================================== 00:13:12.475 Total : 9624.26 37.59 6651.40 1789.49 10963.00 00:13:12.475 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7nYAX2CnCW 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7nYAX2CnCW' 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73328 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73328 /var/tmp/bdevperf.sock 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73328 ']' 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:12.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.475 09:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:12.475 [2024-07-15 09:39:04.994089] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:12.475 [2024-07-15 09:39:04.994185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73328 ] 00:13:12.475 [2024-07-15 09:39:05.133544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.475 [2024-07-15 09:39:05.258240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.475 [2024-07-15 09:39:05.312056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:12.475 09:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.475 09:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:12.475 09:39:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7nYAX2CnCW 00:13:12.475 [2024-07-15 09:39:05.591822] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:12.475 [2024-07-15 09:39:05.591975] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:12.475 TLSTESTn1 00:13:12.475 09:39:05 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:12.475 Running I/O for 10 seconds... 00:13:22.543 00:13:22.543 Latency(us) 00:13:22.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.543 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:22.543 Verification LBA range: start 0x0 length 0x2000 00:13:22.543 TLSTESTn1 : 10.02 3926.74 15.34 0.00 0.00 32530.07 7298.33 27644.28 00:13:22.543 =================================================================================================================== 00:13:22.543 Total : 3926.74 15.34 0.00 0.00 32530.07 7298.33 27644.28 00:13:22.543 0 00:13:22.543 09:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:22.543 09:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73328 00:13:22.543 09:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73328 ']' 00:13:22.543 09:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73328 00:13:22.543 09:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:22.543 09:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:22.543 09:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73328 00:13:22.543 09:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:22.543 killing process with pid 73328 00:13:22.543 09:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:22.543 09:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73328' 00:13:22.543 09:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73328 00:13:22.543 Received shutdown signal, test time was about 10.000000 seconds 00:13:22.543 00:13:22.543 Latency(us) 00:13:22.543 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:13:22.543 =================================================================================================================== 00:13:22.543 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:22.543 [2024-07-15 09:39:15.841810] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:22.543 09:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73328 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cZG6vP4sA3 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cZG6vP4sA3 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cZG6vP4sA3 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cZG6vP4sA3' 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73454 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73454 /var/tmp/bdevperf.sock 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73454 ']' 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.543 09:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.543 [2024-07-15 09:39:16.123094] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:22.543 [2024-07-15 09:39:16.123191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73454 ] 00:13:22.544 [2024-07-15 09:39:16.258604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.544 [2024-07-15 09:39:16.379372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.544 [2024-07-15 09:39:16.433135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:22.803 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.803 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:22.803 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cZG6vP4sA3 00:13:23.061 [2024-07-15 09:39:17.333662] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:23.061 [2024-07-15 09:39:17.333799] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:23.061 [2024-07-15 09:39:17.345681] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:23.061 [2024-07-15 09:39:17.346418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a31f0 (107): Transport endpoint is not connected 00:13:23.062 [2024-07-15 09:39:17.347405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a31f0 (9): Bad file descriptor 00:13:23.062 [2024-07-15 09:39:17.348400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:23.062 [2024-07-15 09:39:17.348426] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:23.062 [2024-07-15 09:39:17.348441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:23.062 request: 00:13:23.062 { 00:13:23.062 "name": "TLSTEST", 00:13:23.062 "trtype": "tcp", 00:13:23.062 "traddr": "10.0.0.2", 00:13:23.062 "adrfam": "ipv4", 00:13:23.062 "trsvcid": "4420", 00:13:23.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:23.062 "prchk_reftag": false, 00:13:23.062 "prchk_guard": false, 00:13:23.062 "hdgst": false, 00:13:23.062 "ddgst": false, 00:13:23.062 "psk": "/tmp/tmp.cZG6vP4sA3", 00:13:23.062 "method": "bdev_nvme_attach_controller", 00:13:23.062 "req_id": 1 00:13:23.062 } 00:13:23.062 Got JSON-RPC error response 00:13:23.062 response: 00:13:23.062 { 00:13:23.062 "code": -5, 00:13:23.062 "message": "Input/output error" 00:13:23.062 } 00:13:23.062 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73454 00:13:23.062 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73454 ']' 00:13:23.062 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73454 00:13:23.062 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:23.062 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.062 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73454 00:13:23.062 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:23.062 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:23.062 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73454' 00:13:23.062 killing process with pid 73454 00:13:23.062 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73454 00:13:23.062 Received shutdown signal, test time was about 10.000000 seconds 00:13:23.062 00:13:23.062 Latency(us) 00:13:23.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.062 =================================================================================================================== 00:13:23.062 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:23.062 [2024-07-15 09:39:17.393053] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:23.062 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73454 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7nYAX2CnCW 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7nYAX2CnCW 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7nYAX2CnCW 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7nYAX2CnCW' 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73476 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73476 /var/tmp/bdevperf.sock 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73476 ']' 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.320 09:39:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.320 [2024-07-15 09:39:17.666338] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:23.320 [2024-07-15 09:39:17.666436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73476 ] 00:13:23.578 [2024-07-15 09:39:17.798313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.578 [2024-07-15 09:39:17.941888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.578 [2024-07-15 09:39:18.022397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:24.511 09:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.511 09:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:24.511 09:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.7nYAX2CnCW 00:13:24.511 [2024-07-15 09:39:18.926110] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:24.511 [2024-07-15 09:39:18.926254] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:24.511 [2024-07-15 09:39:18.935604] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:24.511 [2024-07-15 09:39:18.935659] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:24.511 [2024-07-15 09:39:18.935716] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:24.511 [2024-07-15 09:39:18.935785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d431f0 (107): Transport endpoint is not connected 00:13:24.511 [2024-07-15 09:39:18.936774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d431f0 (9): Bad file descriptor 00:13:24.511 [2024-07-15 09:39:18.937770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:24.511 [2024-07-15 09:39:18.937798] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:24.511 [2024-07-15 09:39:18.937820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
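The posix.c error above ("Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1") shows that the target looks the key up by the TLS PSK identity, which ties the host NQN and the subsystem NQN together; host2 was never added to the subsystem, so no identity matches even though the key itself is valid. For that connection to succeed, host2 would have needed its own registration, mirroring the add_host call made for host1 earlier in this log (sketch):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.7nYAX2CnCW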
00:13:24.511 request: 00:13:24.511 { 00:13:24.511 "name": "TLSTEST", 00:13:24.511 "trtype": "tcp", 00:13:24.511 "traddr": "10.0.0.2", 00:13:24.511 "adrfam": "ipv4", 00:13:24.511 "trsvcid": "4420", 00:13:24.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.511 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:24.511 "prchk_reftag": false, 00:13:24.511 "prchk_guard": false, 00:13:24.511 "hdgst": false, 00:13:24.511 "ddgst": false, 00:13:24.511 "psk": "/tmp/tmp.7nYAX2CnCW", 00:13:24.511 "method": "bdev_nvme_attach_controller", 00:13:24.511 "req_id": 1 00:13:24.511 } 00:13:24.511 Got JSON-RPC error response 00:13:24.511 response: 00:13:24.511 { 00:13:24.511 "code": -5, 00:13:24.511 "message": "Input/output error" 00:13:24.511 } 00:13:24.511 09:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73476 00:13:24.511 09:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73476 ']' 00:13:24.511 09:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73476 00:13:24.512 09:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:24.512 09:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:24.512 09:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73476 00:13:24.770 09:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:24.770 09:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:24.770 09:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73476' 00:13:24.770 killing process with pid 73476 00:13:24.770 09:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73476 00:13:24.770 Received shutdown signal, test time was about 10.000000 seconds 00:13:24.770 00:13:24.770 Latency(us) 00:13:24.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.770 =================================================================================================================== 00:13:24.770 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:24.770 [2024-07-15 09:39:18.983947] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:24.770 09:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73476 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7nYAX2CnCW 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7nYAX2CnCW 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7nYAX2CnCW 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.7nYAX2CnCW' 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73508 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73508 /var/tmp/bdevperf.sock 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73508 ']' 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:24.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.770 09:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:25.028 [2024-07-15 09:39:19.280395] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:25.028 [2024-07-15 09:39:19.280546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73508 ] 00:13:25.028 [2024-07-15 09:39:19.421115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.293 [2024-07-15 09:39:19.553966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.293 [2024-07-15 09:39:19.606878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:25.886 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:25.886 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:25.886 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7nYAX2CnCW 00:13:26.173 [2024-07-15 09:39:20.521990] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:26.173 [2024-07-15 09:39:20.522130] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:26.173 [2024-07-15 09:39:20.526905] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:26.173 [2024-07-15 09:39:20.526950] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:26.173 [2024-07-15 09:39:20.527006] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:26.173 [2024-07-15 09:39:20.527611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea51f0 (107): Transport endpoint is not connected 00:13:26.173 [2024-07-15 09:39:20.528596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea51f0 (9): Bad file descriptor 00:13:26.173 [2024-07-15 09:39:20.529593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:26.173 [2024-07-15 09:39:20.529622] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:26.173 [2024-07-15 09:39:20.529637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:13:26.173 request: 00:13:26.173 { 00:13:26.173 "name": "TLSTEST", 00:13:26.173 "trtype": "tcp", 00:13:26.173 "traddr": "10.0.0.2", 00:13:26.173 "adrfam": "ipv4", 00:13:26.173 "trsvcid": "4420", 00:13:26.173 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:26.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:26.173 "prchk_reftag": false, 00:13:26.173 "prchk_guard": false, 00:13:26.173 "hdgst": false, 00:13:26.173 "ddgst": false, 00:13:26.173 "psk": "/tmp/tmp.7nYAX2CnCW", 00:13:26.173 "method": "bdev_nvme_attach_controller", 00:13:26.173 "req_id": 1 00:13:26.173 } 00:13:26.173 Got JSON-RPC error response 00:13:26.173 response: 00:13:26.173 { 00:13:26.173 "code": -5, 00:13:26.173 "message": "Input/output error" 00:13:26.173 } 00:13:26.173 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73508 00:13:26.173 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73508 ']' 00:13:26.173 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73508 00:13:26.173 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:26.173 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.173 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73508 00:13:26.173 killing process with pid 73508 00:13:26.173 Received shutdown signal, test time was about 10.000000 seconds 00:13:26.173 00:13:26.173 Latency(us) 00:13:26.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.173 =================================================================================================================== 00:13:26.173 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:26.173 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:26.173 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:26.173 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73508' 00:13:26.173 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73508 00:13:26.173 [2024-07-15 09:39:20.570077] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:26.173 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73508 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
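Each of these negative cases is wrapped in the harness's NOT helper, visible in the xtrace as "NOT run_bdevperf ..." followed by es=1: the wrapped command is expected to fail, and its non-zero exit status is inverted so the test still counts as a pass. A rough stand-in, not autotest_common.sh's actual implementation:

  # succeed only when the wrapped command fails
  NOT() {
      if "$@"; then
          return 1        # unexpected success
      else
          return 0        # failure is the expected result
      fi
  }
  # usage matching the case being set up below: attach with an empty PSK
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''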
00:13:26.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73537 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:26.443 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73537 /var/tmp/bdevperf.sock 00:13:26.444 09:39:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:26.444 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73537 ']' 00:13:26.444 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:26.444 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.444 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:26.444 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.444 09:39:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.444 [2024-07-15 09:39:20.844692] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:26.444 [2024-07-15 09:39:20.844809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73537 ] 00:13:26.701 [2024-07-15 09:39:20.981378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.701 [2024-07-15 09:39:21.098680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.701 [2024-07-15 09:39:21.151116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:27.657 09:39:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.657 09:39:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:27.657 09:39:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:27.657 [2024-07-15 09:39:22.116095] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:27.657 [2024-07-15 09:39:22.117538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef2c00 (9): Bad file descriptor 00:13:27.657 [2024-07-15 09:39:22.118533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:27.657 [2024-07-15 09:39:22.118570] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:27.657 [2024-07-15 09:39:22.118596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:27.657 request: 00:13:27.657 { 00:13:27.657 "name": "TLSTEST", 00:13:27.657 "trtype": "tcp", 00:13:27.657 "traddr": "10.0.0.2", 00:13:27.657 "adrfam": "ipv4", 00:13:27.657 "trsvcid": "4420", 00:13:27.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:27.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:27.657 "prchk_reftag": false, 00:13:27.657 "prchk_guard": false, 00:13:27.657 "hdgst": false, 00:13:27.657 "ddgst": false, 00:13:27.657 "method": "bdev_nvme_attach_controller", 00:13:27.657 "req_id": 1 00:13:27.657 } 00:13:27.657 Got JSON-RPC error response 00:13:27.657 response: 00:13:27.657 { 00:13:27.657 "code": -5, 00:13:27.657 "message": "Input/output error" 00:13:27.657 } 00:13:27.915 09:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73537 00:13:27.915 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73537 ']' 00:13:27.915 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73537 00:13:27.915 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:27.915 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:27.915 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73537 00:13:27.915 killing process with pid 73537 00:13:27.915 Received shutdown signal, test time was about 10.000000 seconds 00:13:27.915 00:13:27.915 Latency(us) 00:13:27.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.915 =================================================================================================================== 00:13:27.915 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:27.915 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:27.915 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:27.915 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73537' 00:13:27.915 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73537 00:13:27.915 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73537 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 73097 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73097 ']' 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73097 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73097 00:13:28.174 killing process with pid 73097 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
73097' 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73097 00:13:28.174 [2024-07-15 09:39:22.410229] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:28.174 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73097 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.iUSK22Zx90 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.iUSK22Zx90 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73574 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73574 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73574 ']' 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:28.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.433 09:39:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:28.433 [2024-07-15 09:39:22.766679] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
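The key_long value generated just above is the NVMe TLS PSK in interchange format: the fixed "NVMeTLSkey-1" prefix, a two-digit hash indicator ("02", matching the digest argument 2 passed to format_interchange_psk), a base64 blob, and a trailing ":". The base64 payload visibly begins with the configured key string encoded as raw bytes. The sketch below reproduces that layout outside the test suite, mirroring the inline "python -" call the trace's format_key helper makes; it assumes the last four bytes of the payload are a CRC-32 of the key appended least-significant byte first, which this log does not confirm.

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import sys, base64, zlib
key = sys.argv[1].encode()                    # the key string is used as raw bytes, as the base64 above shows
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed: CRC-32 of the key, appended little-endian
# "02" reflects the digest argument (2) used in the trace
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
EOF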
00:13:28.433 [2024-07-15 09:39:22.766787] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.692 [2024-07-15 09:39:22.903150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.692 [2024-07-15 09:39:23.019687] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.692 [2024-07-15 09:39:23.019755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.692 [2024-07-15 09:39:23.019767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.692 [2024-07-15 09:39:23.019776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.692 [2024-07-15 09:39:23.019783] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.692 [2024-07-15 09:39:23.019811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.692 [2024-07-15 09:39:23.072554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:29.647 09:39:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.647 09:39:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:29.647 09:39:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.647 09:39:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:29.647 09:39:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:29.647 09:39:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.647 09:39:23 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.iUSK22Zx90 00:13:29.647 09:39:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iUSK22Zx90 00:13:29.647 09:39:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:29.647 [2024-07-15 09:39:24.059849] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.647 09:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:30.213 09:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:30.213 [2024-07-15 09:39:24.599939] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:30.213 [2024-07-15 09:39:24.600182] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.213 09:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:30.471 malloc0 00:13:30.471 09:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:30.730 09:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iUSK22Zx90 00:13:30.989 
[2024-07-15 09:39:25.327360] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iUSK22Zx90 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iUSK22Zx90' 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73629 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73629 /var/tmp/bdevperf.sock 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73629 ']' 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:30.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.989 09:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.989 [2024-07-15 09:39:25.396398] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:30.989 [2024-07-15 09:39:25.396498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73629 ] 00:13:31.247 [2024-07-15 09:39:25.532031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.247 [2024-07-15 09:39:25.650784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.247 [2024-07-15 09:39:25.705875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:32.182 09:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.182 09:39:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:32.182 09:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iUSK22Zx90 00:13:32.182 [2024-07-15 09:39:26.598824] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:32.182 [2024-07-15 09:39:26.598979] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:32.440 TLSTESTn1 00:13:32.440 09:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:32.440 Running I/O for 10 seconds... 00:13:42.411 00:13:42.411 Latency(us) 00:13:42.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.411 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:42.411 Verification LBA range: start 0x0 length 0x2000 00:13:42.411 TLSTESTn1 : 10.02 3765.47 14.71 0.00 0.00 33927.92 6762.12 35746.91 00:13:42.411 =================================================================================================================== 00:13:42.411 Total : 3765.47 14.71 0.00 0.00 33927.92 6762.12 35746.91 00:13:42.411 0 00:13:42.411 09:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:42.411 09:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73629 00:13:42.411 09:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73629 ']' 00:13:42.411 09:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73629 00:13:42.411 09:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:42.412 09:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:42.412 09:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73629 00:13:42.412 09:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:42.412 killing process with pid 73629 00:13:42.412 09:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:42.412 09:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73629' 00:13:42.412 Received shutdown signal, test time was about 10.000000 seconds 00:13:42.412 00:13:42.412 Latency(us) 00:13:42.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.412 
=================================================================================================================== 00:13:42.412 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:42.412 09:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73629 00:13:42.412 [2024-07-15 09:39:36.863415] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:42.412 09:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73629 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.iUSK22Zx90 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iUSK22Zx90 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iUSK22Zx90 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iUSK22Zx90 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iUSK22Zx90' 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73764 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73764 /var/tmp/bdevperf.sock 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73764 ']' 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.670 09:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.928 [2024-07-15 09:39:37.169777] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:42.928 [2024-07-15 09:39:37.169919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73764 ] 00:13:42.928 [2024-07-15 09:39:37.310016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.186 [2024-07-15 09:39:37.430146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.186 [2024-07-15 09:39:37.484520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:43.751 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.751 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:43.751 09:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iUSK22Zx90 00:13:44.009 [2024-07-15 09:39:38.337056] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:44.009 [2024-07-15 09:39:38.337141] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:44.009 [2024-07-15 09:39:38.337161] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.iUSK22Zx90 00:13:44.009 request: 00:13:44.009 { 00:13:44.009 "name": "TLSTEST", 00:13:44.009 "trtype": "tcp", 00:13:44.009 "traddr": "10.0.0.2", 00:13:44.009 "adrfam": "ipv4", 00:13:44.009 "trsvcid": "4420", 00:13:44.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:44.009 "prchk_reftag": false, 00:13:44.009 "prchk_guard": false, 00:13:44.009 "hdgst": false, 00:13:44.009 "ddgst": false, 00:13:44.009 "psk": "/tmp/tmp.iUSK22Zx90", 00:13:44.009 "method": "bdev_nvme_attach_controller", 00:13:44.009 "req_id": 1 00:13:44.009 } 00:13:44.009 Got JSON-RPC error response 00:13:44.009 response: 00:13:44.009 { 00:13:44.009 "code": -1, 00:13:44.009 "message": "Operation not permitted" 00:13:44.009 } 00:13:44.009 09:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73764 00:13:44.009 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73764 ']' 00:13:44.009 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73764 00:13:44.009 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:44.009 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.009 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73764 00:13:44.009 killing process with pid 73764 00:13:44.009 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.009 00:13:44.009 Latency(us) 00:13:44.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.009 =================================================================================================================== 00:13:44.009 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:44.009 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:44.009 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:44.009 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73764' 00:13:44.009 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73764 00:13:44.009 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73764 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73574 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73574 ']' 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73574 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73574 00:13:44.268 killing process with pid 73574 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73574' 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73574 00:13:44.268 [2024-07-15 09:39:38.631792] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:44.268 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73574 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73792 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73792 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73792 ']' 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.526 09:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.526 [2024-07-15 09:39:38.967838] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:44.526 [2024-07-15 09:39:38.967988] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.784 [2024-07-15 09:39:39.115266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.784 [2024-07-15 09:39:39.248682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.784 [2024-07-15 09:39:39.248753] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.784 [2024-07-15 09:39:39.248768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.784 [2024-07-15 09:39:39.248779] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.784 [2024-07-15 09:39:39.248790] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.784 [2024-07-15 09:39:39.248821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.042 [2024-07-15 09:39:39.308515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.iUSK22Zx90 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.iUSK22Zx90 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.iUSK22Zx90 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iUSK22Zx90 00:13:45.608 09:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:45.865 [2024-07-15 09:39:40.207426] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.865 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:46.123 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:46.380 [2024-07-15 09:39:40.775521] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:13:46.380 [2024-07-15 09:39:40.775766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.380 09:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:46.638 malloc0 00:13:46.638 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:46.895 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iUSK22Zx90 00:13:47.152 [2024-07-15 09:39:41.499214] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:47.152 [2024-07-15 09:39:41.499272] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:47.152 [2024-07-15 09:39:41.499308] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:47.152 request: 00:13:47.152 { 00:13:47.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.152 "host": "nqn.2016-06.io.spdk:host1", 00:13:47.152 "psk": "/tmp/tmp.iUSK22Zx90", 00:13:47.152 "method": "nvmf_subsystem_add_host", 00:13:47.152 "req_id": 1 00:13:47.152 } 00:13:47.152 Got JSON-RPC error response 00:13:47.152 response: 00:13:47.152 { 00:13:47.152 "code": -32603, 00:13:47.152 "message": "Internal error" 00:13:47.152 } 00:13:47.152 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:47.152 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:47.152 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:47.152 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:47.152 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73792 00:13:47.152 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73792 ']' 00:13:47.152 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73792 00:13:47.152 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:47.152 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:47.153 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73792 00:13:47.153 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:47.153 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:47.153 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73792' 00:13:47.153 killing process with pid 73792 00:13:47.153 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73792 00:13:47.153 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73792 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.iUSK22Zx90 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
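Both failures above trace back to the same permission check: once the key file was loosened to 0666, bdev_nvme_load_psk rejected it on the initiator side (bdev_nvme_attach_controller returning "Operation not permitted") and tcp_load_psk rejected it on the target side (nvmf_subsystem_add_host returning "Internal error"). The chmod 0600 just above undoes that before the target is restarted. A minimal sketch of the precondition, reusing the paths and commands from this trace:

# The PSK file must be readable/writable by its owner only before it is handed to SPDK
chmod 0600 /tmp/tmp.iUSK22Zx90
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iUSK22Zx90
# With the file left at 0666, the same call fails with "Could not retrieve PSK from file"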
00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73859 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73859 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73859 ']' 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:47.410 09:39:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.410 [2024-07-15 09:39:41.872721] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:13:47.410 [2024-07-15 09:39:41.872823] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.667 [2024-07-15 09:39:42.014950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.924 [2024-07-15 09:39:42.143011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.924 [2024-07-15 09:39:42.143075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.924 [2024-07-15 09:39:42.143088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.924 [2024-07-15 09:39:42.143097] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.924 [2024-07-15 09:39:42.143105] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:47.924 [2024-07-15 09:39:42.143143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.924 [2024-07-15 09:39:42.199771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:48.489 09:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.489 09:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:48.489 09:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:48.489 09:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:48.489 09:39:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.489 09:39:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.489 09:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.iUSK22Zx90 00:13:48.489 09:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iUSK22Zx90 00:13:48.489 09:39:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:48.747 [2024-07-15 09:39:43.177501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.747 09:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:49.307 09:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:49.307 [2024-07-15 09:39:43.701639] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:49.307 [2024-07-15 09:39:43.701964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.307 09:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:49.563 malloc0 00:13:49.563 09:39:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:49.821 09:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iUSK22Zx90 00:13:50.078 [2024-07-15 09:39:44.421069] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:50.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
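With the target configured and the host added cleanly this time (the PSK-path deprecation warning above is the only complaint), the bdevperf process that starts next is driven over its own RPC socket. As a sketch, the attach call below is the one this instance issues next, and the perform_tests invocation is copied from the earlier successful run of the same helper:

# Initiator side: attach over NVMe/TCP with the PSK, then run the workload through bdevperf's RPC
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iUSK22Zx90
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests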
00:13:50.078 09:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73914 00:13:50.078 09:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:50.078 09:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:50.078 09:39:44 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73914 /var/tmp/bdevperf.sock 00:13:50.078 09:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73914 ']' 00:13:50.078 09:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.078 09:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.078 09:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.078 09:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.078 09:39:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.078 [2024-07-15 09:39:44.493113] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:13:50.078 [2024-07-15 09:39:44.493240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73914 ] 00:13:50.336 [2024-07-15 09:39:44.629545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.336 [2024-07-15 09:39:44.750831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.593 [2024-07-15 09:39:44.807280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:51.159 09:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.159 09:39:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:51.159 09:39:45 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iUSK22Zx90 00:13:51.417 [2024-07-15 09:39:45.745234] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:51.417 [2024-07-15 09:39:45.745386] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:51.417 TLSTESTn1 00:13:51.417 09:39:45 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:51.981 09:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:51.981 "subsystems": [ 00:13:51.981 { 00:13:51.981 "subsystem": "keyring", 00:13:51.981 "config": [] 00:13:51.981 }, 00:13:51.981 { 00:13:51.981 "subsystem": "iobuf", 00:13:51.981 "config": [ 00:13:51.981 { 00:13:51.981 "method": "iobuf_set_options", 00:13:51.981 "params": { 00:13:51.981 "small_pool_count": 8192, 00:13:51.981 "large_pool_count": 1024, 00:13:51.981 "small_bufsize": 8192, 00:13:51.981 "large_bufsize": 135168 00:13:51.981 } 00:13:51.981 } 00:13:51.981 ] 00:13:51.981 }, 00:13:51.981 { 00:13:51.981 "subsystem": "sock", 00:13:51.981 "config": [ 00:13:51.981 { 00:13:51.981 
"method": "sock_set_default_impl", 00:13:51.981 "params": { 00:13:51.981 "impl_name": "uring" 00:13:51.981 } 00:13:51.981 }, 00:13:51.981 { 00:13:51.982 "method": "sock_impl_set_options", 00:13:51.982 "params": { 00:13:51.982 "impl_name": "ssl", 00:13:51.982 "recv_buf_size": 4096, 00:13:51.982 "send_buf_size": 4096, 00:13:51.982 "enable_recv_pipe": true, 00:13:51.982 "enable_quickack": false, 00:13:51.982 "enable_placement_id": 0, 00:13:51.982 "enable_zerocopy_send_server": true, 00:13:51.982 "enable_zerocopy_send_client": false, 00:13:51.982 "zerocopy_threshold": 0, 00:13:51.982 "tls_version": 0, 00:13:51.982 "enable_ktls": false 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "sock_impl_set_options", 00:13:51.982 "params": { 00:13:51.982 "impl_name": "posix", 00:13:51.982 "recv_buf_size": 2097152, 00:13:51.982 "send_buf_size": 2097152, 00:13:51.982 "enable_recv_pipe": true, 00:13:51.982 "enable_quickack": false, 00:13:51.982 "enable_placement_id": 0, 00:13:51.982 "enable_zerocopy_send_server": true, 00:13:51.982 "enable_zerocopy_send_client": false, 00:13:51.982 "zerocopy_threshold": 0, 00:13:51.982 "tls_version": 0, 00:13:51.982 "enable_ktls": false 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "sock_impl_set_options", 00:13:51.982 "params": { 00:13:51.982 "impl_name": "uring", 00:13:51.982 "recv_buf_size": 2097152, 00:13:51.982 "send_buf_size": 2097152, 00:13:51.982 "enable_recv_pipe": true, 00:13:51.982 "enable_quickack": false, 00:13:51.982 "enable_placement_id": 0, 00:13:51.982 "enable_zerocopy_send_server": false, 00:13:51.982 "enable_zerocopy_send_client": false, 00:13:51.982 "zerocopy_threshold": 0, 00:13:51.982 "tls_version": 0, 00:13:51.982 "enable_ktls": false 00:13:51.982 } 00:13:51.982 } 00:13:51.982 ] 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "subsystem": "vmd", 00:13:51.982 "config": [] 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "subsystem": "accel", 00:13:51.982 "config": [ 00:13:51.982 { 00:13:51.982 "method": "accel_set_options", 00:13:51.982 "params": { 00:13:51.982 "small_cache_size": 128, 00:13:51.982 "large_cache_size": 16, 00:13:51.982 "task_count": 2048, 00:13:51.982 "sequence_count": 2048, 00:13:51.982 "buf_count": 2048 00:13:51.982 } 00:13:51.982 } 00:13:51.982 ] 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "subsystem": "bdev", 00:13:51.982 "config": [ 00:13:51.982 { 00:13:51.982 "method": "bdev_set_options", 00:13:51.982 "params": { 00:13:51.982 "bdev_io_pool_size": 65535, 00:13:51.982 "bdev_io_cache_size": 256, 00:13:51.982 "bdev_auto_examine": true, 00:13:51.982 "iobuf_small_cache_size": 128, 00:13:51.982 "iobuf_large_cache_size": 16 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "bdev_raid_set_options", 00:13:51.982 "params": { 00:13:51.982 "process_window_size_kb": 1024 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "bdev_iscsi_set_options", 00:13:51.982 "params": { 00:13:51.982 "timeout_sec": 30 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "bdev_nvme_set_options", 00:13:51.982 "params": { 00:13:51.982 "action_on_timeout": "none", 00:13:51.982 "timeout_us": 0, 00:13:51.982 "timeout_admin_us": 0, 00:13:51.982 "keep_alive_timeout_ms": 10000, 00:13:51.982 "arbitration_burst": 0, 00:13:51.982 "low_priority_weight": 0, 00:13:51.982 "medium_priority_weight": 0, 00:13:51.982 "high_priority_weight": 0, 00:13:51.982 "nvme_adminq_poll_period_us": 10000, 00:13:51.982 "nvme_ioq_poll_period_us": 0, 00:13:51.982 "io_queue_requests": 0, 00:13:51.982 
"delay_cmd_submit": true, 00:13:51.982 "transport_retry_count": 4, 00:13:51.982 "bdev_retry_count": 3, 00:13:51.982 "transport_ack_timeout": 0, 00:13:51.982 "ctrlr_loss_timeout_sec": 0, 00:13:51.982 "reconnect_delay_sec": 0, 00:13:51.982 "fast_io_fail_timeout_sec": 0, 00:13:51.982 "disable_auto_failback": false, 00:13:51.982 "generate_uuids": false, 00:13:51.982 "transport_tos": 0, 00:13:51.982 "nvme_error_stat": false, 00:13:51.982 "rdma_srq_size": 0, 00:13:51.982 "io_path_stat": false, 00:13:51.982 "allow_accel_sequence": false, 00:13:51.982 "rdma_max_cq_size": 0, 00:13:51.982 "rdma_cm_event_timeout_ms": 0, 00:13:51.982 "dhchap_digests": [ 00:13:51.982 "sha256", 00:13:51.982 "sha384", 00:13:51.982 "sha512" 00:13:51.982 ], 00:13:51.982 "dhchap_dhgroups": [ 00:13:51.982 "null", 00:13:51.982 "ffdhe2048", 00:13:51.982 "ffdhe3072", 00:13:51.982 "ffdhe4096", 00:13:51.982 "ffdhe6144", 00:13:51.982 "ffdhe8192" 00:13:51.982 ] 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "bdev_nvme_set_hotplug", 00:13:51.982 "params": { 00:13:51.982 "period_us": 100000, 00:13:51.982 "enable": false 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "bdev_malloc_create", 00:13:51.982 "params": { 00:13:51.982 "name": "malloc0", 00:13:51.982 "num_blocks": 8192, 00:13:51.982 "block_size": 4096, 00:13:51.982 "physical_block_size": 4096, 00:13:51.982 "uuid": "307ce83a-0f23-474c-957f-b2a28623ea70", 00:13:51.982 "optimal_io_boundary": 0 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "bdev_wait_for_examine" 00:13:51.982 } 00:13:51.982 ] 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "subsystem": "nbd", 00:13:51.982 "config": [] 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "subsystem": "scheduler", 00:13:51.982 "config": [ 00:13:51.982 { 00:13:51.982 "method": "framework_set_scheduler", 00:13:51.982 "params": { 00:13:51.982 "name": "static" 00:13:51.982 } 00:13:51.982 } 00:13:51.982 ] 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "subsystem": "nvmf", 00:13:51.982 "config": [ 00:13:51.982 { 00:13:51.982 "method": "nvmf_set_config", 00:13:51.982 "params": { 00:13:51.982 "discovery_filter": "match_any", 00:13:51.982 "admin_cmd_passthru": { 00:13:51.982 "identify_ctrlr": false 00:13:51.982 } 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "nvmf_set_max_subsystems", 00:13:51.982 "params": { 00:13:51.982 "max_subsystems": 1024 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "nvmf_set_crdt", 00:13:51.982 "params": { 00:13:51.982 "crdt1": 0, 00:13:51.982 "crdt2": 0, 00:13:51.982 "crdt3": 0 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "nvmf_create_transport", 00:13:51.982 "params": { 00:13:51.982 "trtype": "TCP", 00:13:51.982 "max_queue_depth": 128, 00:13:51.982 "max_io_qpairs_per_ctrlr": 127, 00:13:51.982 "in_capsule_data_size": 4096, 00:13:51.982 "max_io_size": 131072, 00:13:51.982 "io_unit_size": 131072, 00:13:51.982 "max_aq_depth": 128, 00:13:51.982 "num_shared_buffers": 511, 00:13:51.982 "buf_cache_size": 4294967295, 00:13:51.982 "dif_insert_or_strip": false, 00:13:51.982 "zcopy": false, 00:13:51.982 "c2h_success": false, 00:13:51.982 "sock_priority": 0, 00:13:51.982 "abort_timeout_sec": 1, 00:13:51.982 "ack_timeout": 0, 00:13:51.982 "data_wr_pool_size": 0 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "nvmf_create_subsystem", 00:13:51.982 "params": { 00:13:51.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.982 "allow_any_host": false, 00:13:51.982 "serial_number": 
"SPDK00000000000001", 00:13:51.982 "model_number": "SPDK bdev Controller", 00:13:51.982 "max_namespaces": 10, 00:13:51.982 "min_cntlid": 1, 00:13:51.982 "max_cntlid": 65519, 00:13:51.982 "ana_reporting": false 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "nvmf_subsystem_add_host", 00:13:51.982 "params": { 00:13:51.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.982 "host": "nqn.2016-06.io.spdk:host1", 00:13:51.982 "psk": "/tmp/tmp.iUSK22Zx90" 00:13:51.982 } 00:13:51.982 }, 00:13:51.982 { 00:13:51.982 "method": "nvmf_subsystem_add_ns", 00:13:51.982 "params": { 00:13:51.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.982 "namespace": { 00:13:51.982 "nsid": 1, 00:13:51.982 "bdev_name": "malloc0", 00:13:51.982 "nguid": "307CE83A0F23474C957FB2A28623EA70", 00:13:51.982 "uuid": "307ce83a-0f23-474c-957f-b2a28623ea70", 00:13:51.982 "no_auto_visible": false 00:13:51.983 } 00:13:51.983 } 00:13:51.983 }, 00:13:51.983 { 00:13:51.983 "method": "nvmf_subsystem_add_listener", 00:13:51.983 "params": { 00:13:51.983 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.983 "listen_address": { 00:13:51.983 "trtype": "TCP", 00:13:51.983 "adrfam": "IPv4", 00:13:51.983 "traddr": "10.0.0.2", 00:13:51.983 "trsvcid": "4420" 00:13:51.983 }, 00:13:51.983 "secure_channel": true 00:13:51.983 } 00:13:51.983 } 00:13:51.983 ] 00:13:51.983 } 00:13:51.983 ] 00:13:51.983 }' 00:13:51.983 09:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:52.241 09:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:52.241 "subsystems": [ 00:13:52.241 { 00:13:52.241 "subsystem": "keyring", 00:13:52.241 "config": [] 00:13:52.241 }, 00:13:52.241 { 00:13:52.241 "subsystem": "iobuf", 00:13:52.241 "config": [ 00:13:52.241 { 00:13:52.241 "method": "iobuf_set_options", 00:13:52.241 "params": { 00:13:52.241 "small_pool_count": 8192, 00:13:52.241 "large_pool_count": 1024, 00:13:52.241 "small_bufsize": 8192, 00:13:52.241 "large_bufsize": 135168 00:13:52.241 } 00:13:52.241 } 00:13:52.241 ] 00:13:52.241 }, 00:13:52.241 { 00:13:52.241 "subsystem": "sock", 00:13:52.241 "config": [ 00:13:52.241 { 00:13:52.241 "method": "sock_set_default_impl", 00:13:52.241 "params": { 00:13:52.241 "impl_name": "uring" 00:13:52.241 } 00:13:52.241 }, 00:13:52.241 { 00:13:52.241 "method": "sock_impl_set_options", 00:13:52.241 "params": { 00:13:52.241 "impl_name": "ssl", 00:13:52.241 "recv_buf_size": 4096, 00:13:52.241 "send_buf_size": 4096, 00:13:52.241 "enable_recv_pipe": true, 00:13:52.241 "enable_quickack": false, 00:13:52.241 "enable_placement_id": 0, 00:13:52.241 "enable_zerocopy_send_server": true, 00:13:52.241 "enable_zerocopy_send_client": false, 00:13:52.241 "zerocopy_threshold": 0, 00:13:52.241 "tls_version": 0, 00:13:52.241 "enable_ktls": false 00:13:52.241 } 00:13:52.241 }, 00:13:52.241 { 00:13:52.241 "method": "sock_impl_set_options", 00:13:52.241 "params": { 00:13:52.241 "impl_name": "posix", 00:13:52.241 "recv_buf_size": 2097152, 00:13:52.241 "send_buf_size": 2097152, 00:13:52.241 "enable_recv_pipe": true, 00:13:52.241 "enable_quickack": false, 00:13:52.241 "enable_placement_id": 0, 00:13:52.241 "enable_zerocopy_send_server": true, 00:13:52.241 "enable_zerocopy_send_client": false, 00:13:52.241 "zerocopy_threshold": 0, 00:13:52.241 "tls_version": 0, 00:13:52.241 "enable_ktls": false 00:13:52.241 } 00:13:52.241 }, 00:13:52.241 { 00:13:52.241 "method": "sock_impl_set_options", 00:13:52.241 "params": { 00:13:52.241 "impl_name": "uring", 
00:13:52.241 "recv_buf_size": 2097152, 00:13:52.241 "send_buf_size": 2097152, 00:13:52.241 "enable_recv_pipe": true, 00:13:52.241 "enable_quickack": false, 00:13:52.241 "enable_placement_id": 0, 00:13:52.241 "enable_zerocopy_send_server": false, 00:13:52.241 "enable_zerocopy_send_client": false, 00:13:52.241 "zerocopy_threshold": 0, 00:13:52.241 "tls_version": 0, 00:13:52.241 "enable_ktls": false 00:13:52.241 } 00:13:52.241 } 00:13:52.241 ] 00:13:52.241 }, 00:13:52.241 { 00:13:52.241 "subsystem": "vmd", 00:13:52.241 "config": [] 00:13:52.241 }, 00:13:52.241 { 00:13:52.241 "subsystem": "accel", 00:13:52.241 "config": [ 00:13:52.241 { 00:13:52.241 "method": "accel_set_options", 00:13:52.241 "params": { 00:13:52.241 "small_cache_size": 128, 00:13:52.241 "large_cache_size": 16, 00:13:52.241 "task_count": 2048, 00:13:52.241 "sequence_count": 2048, 00:13:52.241 "buf_count": 2048 00:13:52.241 } 00:13:52.241 } 00:13:52.241 ] 00:13:52.241 }, 00:13:52.241 { 00:13:52.241 "subsystem": "bdev", 00:13:52.241 "config": [ 00:13:52.241 { 00:13:52.241 "method": "bdev_set_options", 00:13:52.241 "params": { 00:13:52.241 "bdev_io_pool_size": 65535, 00:13:52.241 "bdev_io_cache_size": 256, 00:13:52.241 "bdev_auto_examine": true, 00:13:52.241 "iobuf_small_cache_size": 128, 00:13:52.241 "iobuf_large_cache_size": 16 00:13:52.241 } 00:13:52.241 }, 00:13:52.241 { 00:13:52.241 "method": "bdev_raid_set_options", 00:13:52.241 "params": { 00:13:52.241 "process_window_size_kb": 1024 00:13:52.241 } 00:13:52.241 }, 00:13:52.241 { 00:13:52.241 "method": "bdev_iscsi_set_options", 00:13:52.241 "params": { 00:13:52.241 "timeout_sec": 30 00:13:52.241 } 00:13:52.241 }, 00:13:52.241 { 00:13:52.241 "method": "bdev_nvme_set_options", 00:13:52.241 "params": { 00:13:52.241 "action_on_timeout": "none", 00:13:52.241 "timeout_us": 0, 00:13:52.241 "timeout_admin_us": 0, 00:13:52.241 "keep_alive_timeout_ms": 10000, 00:13:52.241 "arbitration_burst": 0, 00:13:52.241 "low_priority_weight": 0, 00:13:52.241 "medium_priority_weight": 0, 00:13:52.241 "high_priority_weight": 0, 00:13:52.241 "nvme_adminq_poll_period_us": 10000, 00:13:52.241 "nvme_ioq_poll_period_us": 0, 00:13:52.241 "io_queue_requests": 512, 00:13:52.242 "delay_cmd_submit": true, 00:13:52.242 "transport_retry_count": 4, 00:13:52.242 "bdev_retry_count": 3, 00:13:52.242 "transport_ack_timeout": 0, 00:13:52.242 "ctrlr_loss_timeout_sec": 0, 00:13:52.242 "reconnect_delay_sec": 0, 00:13:52.242 "fast_io_fail_timeout_sec": 0, 00:13:52.242 "disable_auto_failback": false, 00:13:52.242 "generate_uuids": false, 00:13:52.242 "transport_tos": 0, 00:13:52.242 "nvme_error_stat": false, 00:13:52.242 "rdma_srq_size": 0, 00:13:52.242 "io_path_stat": false, 00:13:52.242 "allow_accel_sequence": false, 00:13:52.242 "rdma_max_cq_size": 0, 00:13:52.242 "rdma_cm_event_timeout_ms": 0, 00:13:52.242 "dhchap_digests": [ 00:13:52.242 "sha256", 00:13:52.242 "sha384", 00:13:52.242 "sha512" 00:13:52.242 ], 00:13:52.242 "dhchap_dhgroups": [ 00:13:52.242 "null", 00:13:52.242 "ffdhe2048", 00:13:52.242 "ffdhe3072", 00:13:52.242 "ffdhe4096", 00:13:52.242 "ffdhe6144", 00:13:52.242 "ffdhe8192" 00:13:52.242 ] 00:13:52.242 } 00:13:52.242 }, 00:13:52.242 { 00:13:52.242 "method": "bdev_nvme_attach_controller", 00:13:52.242 "params": { 00:13:52.242 "name": "TLSTEST", 00:13:52.242 "trtype": "TCP", 00:13:52.242 "adrfam": "IPv4", 00:13:52.242 "traddr": "10.0.0.2", 00:13:52.242 "trsvcid": "4420", 00:13:52.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.242 "prchk_reftag": false, 00:13:52.242 "prchk_guard": false, 00:13:52.242 
"ctrlr_loss_timeout_sec": 0, 00:13:52.242 "reconnect_delay_sec": 0, 00:13:52.242 "fast_io_fail_timeout_sec": 0, 00:13:52.242 "psk": "/tmp/tmp.iUSK22Zx90", 00:13:52.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:52.242 "hdgst": false, 00:13:52.242 "ddgst": false 00:13:52.242 } 00:13:52.242 }, 00:13:52.242 { 00:13:52.242 "method": "bdev_nvme_set_hotplug", 00:13:52.242 "params": { 00:13:52.242 "period_us": 100000, 00:13:52.242 "enable": false 00:13:52.242 } 00:13:52.242 }, 00:13:52.242 { 00:13:52.242 "method": "bdev_wait_for_examine" 00:13:52.242 } 00:13:52.242 ] 00:13:52.242 }, 00:13:52.242 { 00:13:52.242 "subsystem": "nbd", 00:13:52.242 "config": [] 00:13:52.242 } 00:13:52.242 ] 00:13:52.242 }' 00:13:52.242 09:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73914 00:13:52.242 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73914 ']' 00:13:52.242 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73914 00:13:52.242 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:52.242 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:52.242 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73914 00:13:52.242 killing process with pid 73914 00:13:52.242 Received shutdown signal, test time was about 10.000000 seconds 00:13:52.242 00:13:52.242 Latency(us) 00:13:52.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.242 =================================================================================================================== 00:13:52.242 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:52.242 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:52.242 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:52.242 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73914' 00:13:52.242 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73914 00:13:52.242 [2024-07-15 09:39:46.527387] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:52.242 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73914 00:13:52.501 09:39:46 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73859 00:13:52.501 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73859 ']' 00:13:52.501 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73859 00:13:52.501 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:52.501 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:52.501 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73859 00:13:52.501 killing process with pid 73859 00:13:52.501 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:52.501 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:52.501 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73859' 00:13:52.501 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73859 00:13:52.501 [2024-07-15 09:39:46.791392] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled 
for removal in v24.09 hit 1 times 00:13:52.501 09:39:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73859 00:13:52.759 09:39:47 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:52.759 09:39:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.759 09:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.759 09:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.759 09:39:47 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:52.759 "subsystems": [ 00:13:52.759 { 00:13:52.759 "subsystem": "keyring", 00:13:52.759 "config": [] 00:13:52.759 }, 00:13:52.759 { 00:13:52.759 "subsystem": "iobuf", 00:13:52.759 "config": [ 00:13:52.759 { 00:13:52.759 "method": "iobuf_set_options", 00:13:52.759 "params": { 00:13:52.759 "small_pool_count": 8192, 00:13:52.759 "large_pool_count": 1024, 00:13:52.759 "small_bufsize": 8192, 00:13:52.759 "large_bufsize": 135168 00:13:52.759 } 00:13:52.759 } 00:13:52.759 ] 00:13:52.759 }, 00:13:52.759 { 00:13:52.759 "subsystem": "sock", 00:13:52.759 "config": [ 00:13:52.759 { 00:13:52.759 "method": "sock_set_default_impl", 00:13:52.759 "params": { 00:13:52.759 "impl_name": "uring" 00:13:52.759 } 00:13:52.759 }, 00:13:52.759 { 00:13:52.759 "method": "sock_impl_set_options", 00:13:52.759 "params": { 00:13:52.759 "impl_name": "ssl", 00:13:52.759 "recv_buf_size": 4096, 00:13:52.759 "send_buf_size": 4096, 00:13:52.759 "enable_recv_pipe": true, 00:13:52.759 "enable_quickack": false, 00:13:52.759 "enable_placement_id": 0, 00:13:52.759 "enable_zerocopy_send_server": true, 00:13:52.759 "enable_zerocopy_send_client": false, 00:13:52.759 "zerocopy_threshold": 0, 00:13:52.759 "tls_version": 0, 00:13:52.759 "enable_ktls": false 00:13:52.759 } 00:13:52.759 }, 00:13:52.759 { 00:13:52.759 "method": "sock_impl_set_options", 00:13:52.759 "params": { 00:13:52.759 "impl_name": "posix", 00:13:52.759 "recv_buf_size": 2097152, 00:13:52.759 "send_buf_size": 2097152, 00:13:52.759 "enable_recv_pipe": true, 00:13:52.759 "enable_quickack": false, 00:13:52.759 "enable_placement_id": 0, 00:13:52.759 "enable_zerocopy_send_server": true, 00:13:52.759 "enable_zerocopy_send_client": false, 00:13:52.759 "zerocopy_threshold": 0, 00:13:52.759 "tls_version": 0, 00:13:52.759 "enable_ktls": false 00:13:52.759 } 00:13:52.759 }, 00:13:52.759 { 00:13:52.759 "method": "sock_impl_set_options", 00:13:52.759 "params": { 00:13:52.759 "impl_name": "uring", 00:13:52.759 "recv_buf_size": 2097152, 00:13:52.759 "send_buf_size": 2097152, 00:13:52.759 "enable_recv_pipe": true, 00:13:52.759 "enable_quickack": false, 00:13:52.759 "enable_placement_id": 0, 00:13:52.759 "enable_zerocopy_send_server": false, 00:13:52.759 "enable_zerocopy_send_client": false, 00:13:52.759 "zerocopy_threshold": 0, 00:13:52.759 "tls_version": 0, 00:13:52.759 "enable_ktls": false 00:13:52.759 } 00:13:52.759 } 00:13:52.759 ] 00:13:52.759 }, 00:13:52.759 { 00:13:52.759 "subsystem": "vmd", 00:13:52.759 "config": [] 00:13:52.759 }, 00:13:52.759 { 00:13:52.759 "subsystem": "accel", 00:13:52.759 "config": [ 00:13:52.759 { 00:13:52.759 "method": "accel_set_options", 00:13:52.759 "params": { 00:13:52.759 "small_cache_size": 128, 00:13:52.759 "large_cache_size": 16, 00:13:52.759 "task_count": 2048, 00:13:52.759 "sequence_count": 2048, 00:13:52.759 "buf_count": 2048 00:13:52.759 } 00:13:52.759 } 00:13:52.759 ] 00:13:52.759 }, 00:13:52.759 { 00:13:52.759 "subsystem": "bdev", 00:13:52.759 "config": [ 00:13:52.759 { 
00:13:52.759 "method": "bdev_set_options", 00:13:52.759 "params": { 00:13:52.759 "bdev_io_pool_size": 65535, 00:13:52.759 "bdev_io_cache_size": 256, 00:13:52.759 "bdev_auto_examine": true, 00:13:52.759 "iobuf_small_cache_size": 128, 00:13:52.759 "iobuf_large_cache_size": 16 00:13:52.759 } 00:13:52.759 }, 00:13:52.759 { 00:13:52.759 "method": "bdev_raid_set_options", 00:13:52.759 "params": { 00:13:52.759 "process_window_size_kb": 1024 00:13:52.759 } 00:13:52.759 }, 00:13:52.759 { 00:13:52.759 "method": "bdev_iscsi_set_options", 00:13:52.759 "params": { 00:13:52.759 "timeout_sec": 30 00:13:52.759 } 00:13:52.759 }, 00:13:52.759 { 00:13:52.759 "method": "bdev_nvme_set_options", 00:13:52.759 "params": { 00:13:52.759 "action_on_timeout": "none", 00:13:52.759 "timeout_us": 0, 00:13:52.759 "timeout_admin_us": 0, 00:13:52.759 "keep_alive_timeout_ms": 10000, 00:13:52.759 "arbitration_burst": 0, 00:13:52.759 "low_priority_weight": 0, 00:13:52.759 "medium_priority_weight": 0, 00:13:52.759 "high_priority_weight": 0, 00:13:52.759 "nvme_adminq_poll_period_us": 10000, 00:13:52.759 "nvme_ioq_poll_period_us": 0, 00:13:52.759 "io_queue_requests": 0, 00:13:52.759 "delay_cmd_submit": true, 00:13:52.759 "transport_retry_count": 4, 00:13:52.759 "bdev_retry_count": 3, 00:13:52.759 "transport_ack_timeout": 0, 00:13:52.759 "ctrlr_loss_timeout_sec": 0, 00:13:52.759 "reconnect_delay_sec": 0, 00:13:52.759 "fast_io_fail_timeout_sec": 0, 00:13:52.759 "disable_auto_failback": false, 00:13:52.760 "generate_uuids": false, 00:13:52.760 "transport_tos": 0, 00:13:52.760 "nvme_error_stat": false, 00:13:52.760 "rdma_srq_size": 0, 00:13:52.760 "io_path_stat": false, 00:13:52.760 "allow_accel_sequence": false, 00:13:52.760 "rdma_max_cq_size": 0, 00:13:52.760 "rdma_cm_event_timeout_ms": 0, 00:13:52.760 "dhchap_digests": [ 00:13:52.760 "sha256", 00:13:52.760 "sha384", 00:13:52.760 "sha512" 00:13:52.760 ], 00:13:52.760 "dhchap_dhgroups": [ 00:13:52.760 "null", 00:13:52.760 "ffdhe2048", 00:13:52.760 "ffdhe3072", 00:13:52.760 "ffdhe4096", 00:13:52.760 "ffdhe6144", 00:13:52.760 "ffdhe8192" 00:13:52.760 ] 00:13:52.760 } 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "method": "bdev_nvme_set_hotplug", 00:13:52.760 "params": { 00:13:52.760 "period_us": 100000, 00:13:52.760 "enable": false 00:13:52.760 } 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "method": "bdev_malloc_create", 00:13:52.760 "params": { 00:13:52.760 "name": "malloc0", 00:13:52.760 "num_blocks": 8192, 00:13:52.760 "block_size": 4096, 00:13:52.760 "physical_block_size": 4096, 00:13:52.760 "uuid": "307ce83a-0f23-474c-957f-b2a28623ea70", 00:13:52.760 "optimal_io_boundary": 0 00:13:52.760 } 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "method": "bdev_wait_for_examine" 00:13:52.760 } 00:13:52.760 ] 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "subsystem": "nbd", 00:13:52.760 "config": [] 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "subsystem": "scheduler", 00:13:52.760 "config": [ 00:13:52.760 { 00:13:52.760 "method": "framework_set_scheduler", 00:13:52.760 "params": { 00:13:52.760 "name": "static" 00:13:52.760 } 00:13:52.760 } 00:13:52.760 ] 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "subsystem": "nvmf", 00:13:52.760 "config": [ 00:13:52.760 { 00:13:52.760 "method": "nvmf_set_config", 00:13:52.760 "params": { 00:13:52.760 "discovery_filter": "match_any", 00:13:52.760 "admin_cmd_passthru": { 00:13:52.760 "identify_ctrlr": false 00:13:52.760 } 00:13:52.760 } 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "method": "nvmf_set_max_subsystems", 00:13:52.760 "params": { 00:13:52.760 
"max_subsystems": 1024 00:13:52.760 } 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "method": "nvmf_set_crdt", 00:13:52.760 "params": { 00:13:52.760 "crdt1": 0, 00:13:52.760 "crdt2": 0, 00:13:52.760 "crdt3": 0 00:13:52.760 } 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "method": "nvmf_create_transport", 00:13:52.760 "params": { 00:13:52.760 "trtype": "TCP", 00:13:52.760 "max_queue_depth": 128, 00:13:52.760 "max_io_qpairs_per_ctrlr": 127, 00:13:52.760 "in_capsule_data_size": 4096, 00:13:52.760 "max_io_size": 131072, 00:13:52.760 "io_unit_size": 131072, 00:13:52.760 "max_aq_depth": 128, 00:13:52.760 "num_shared_buffers": 511, 00:13:52.760 "buf_cache_size": 4294967295, 00:13:52.760 "dif_insert_or_strip": false, 00:13:52.760 "zcopy": false, 00:13:52.760 "c2h_success": false, 00:13:52.760 "sock_priority": 0, 00:13:52.760 "abort_timeout_sec": 1, 00:13:52.760 "ack_timeout": 0, 00:13:52.760 "data_wr_pool_size": 0 00:13:52.760 } 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "method": "nvmf_create_subsystem", 00:13:52.760 "params": { 00:13:52.760 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.760 "allow_any_host": false, 00:13:52.760 "serial_number": "SPDK00000000000001", 00:13:52.760 "model_number": "SPDK bdev Controller", 00:13:52.760 "max_namespaces": 10, 00:13:52.760 "min_cntlid": 1, 00:13:52.760 "max_cntlid": 65519, 00:13:52.760 "ana_reporting": false 00:13:52.760 } 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "method": "nvmf_subsystem_add_host", 00:13:52.760 "params": { 00:13:52.760 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.760 "host": "nqn.2016-06.io.spdk:host1", 00:13:52.760 "psk": "/tmp/tmp.iUSK22Zx90" 00:13:52.760 } 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "method": "nvmf_subsystem_add_ns", 00:13:52.760 "params": { 00:13:52.760 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.760 "namespace": { 00:13:52.760 "nsid": 1, 00:13:52.760 "bdev_name": "malloc0", 00:13:52.760 "nguid": "307CE83A0F23474C957FB2A28623EA70", 00:13:52.760 "uuid": "307ce83a-0f23-474c-957f-b2a28623ea70", 00:13:52.760 "no_auto_visible": false 00:13:52.760 } 00:13:52.760 } 00:13:52.760 }, 00:13:52.760 { 00:13:52.760 "method": "nvmf_subsystem_add_listener", 00:13:52.760 "params": { 00:13:52.760 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.760 "listen_address": { 00:13:52.760 "trtype": "TCP", 00:13:52.760 "adrfam": "IPv4", 00:13:52.760 "traddr": "10.0.0.2", 00:13:52.760 "trsvcid": "4420" 00:13:52.760 }, 00:13:52.760 "secure_channel": true 00:13:52.760 } 00:13:52.760 } 00:13:52.760 ] 00:13:52.760 } 00:13:52.760 ] 00:13:52.760 }' 00:13:52.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.760 09:39:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73961 00:13:52.760 09:39:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73961 00:13:52.760 09:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73961 ']' 00:13:52.760 09:39:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:52.760 09:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.760 09:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.760 09:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:52.760 09:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.760 09:39:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.760 [2024-07-15 09:39:47.124250] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:13:52.760 [2024-07-15 09:39:47.124360] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.018 [2024-07-15 09:39:47.264079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.018 [2024-07-15 09:39:47.387885] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.018 [2024-07-15 09:39:47.387966] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.018 [2024-07-15 09:39:47.388002] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.018 [2024-07-15 09:39:47.388014] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.018 [2024-07-15 09:39:47.388025] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.018 [2024-07-15 09:39:47.388173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.276 [2024-07-15 09:39:47.559775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:53.276 [2024-07-15 09:39:47.634949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.276 [2024-07-15 09:39:47.650913] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:53.276 [2024-07-15 09:39:47.666880] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:53.276 [2024-07-15 09:39:47.667259] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73989 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73989 /var/tmp/bdevperf.sock 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73989 ']' 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
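At this point the target (pid 73961) is up and listening on 10.0.0.2:4420 with the JSON configuration that tls.sh@203 echoed into it over /dev/fd/62. As a rough, hedged stand-in for the nvmfappstart helper (not its actual implementation), the same shape can be reproduced with bash process substitution, which is what makes the config appear as a /dev/fd/NN path:

    # minimal sketch, assuming $config holds the JSON echoed above
    config='{ "subsystems": [ ... ] }'
    ./build/bin/nvmf_tgt -m 0x2 -c <(echo "$config") &
    nvmfpid=$!
    # poll the default RPC socket until the app answers before issuing further RPCs
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done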
00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:53.844 09:39:48 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:13:53.844 "subsystems": [ 00:13:53.844 { 00:13:53.844 "subsystem": "keyring", 00:13:53.844 "config": [] 00:13:53.844 }, 00:13:53.844 { 00:13:53.844 "subsystem": "iobuf", 00:13:53.844 "config": [ 00:13:53.844 { 00:13:53.844 "method": "iobuf_set_options", 00:13:53.844 "params": { 00:13:53.844 "small_pool_count": 8192, 00:13:53.844 "large_pool_count": 1024, 00:13:53.844 "small_bufsize": 8192, 00:13:53.844 "large_bufsize": 135168 00:13:53.844 } 00:13:53.844 } 00:13:53.844 ] 00:13:53.844 }, 00:13:53.844 { 00:13:53.844 "subsystem": "sock", 00:13:53.844 "config": [ 00:13:53.844 { 00:13:53.844 "method": "sock_set_default_impl", 00:13:53.844 "params": { 00:13:53.844 "impl_name": "uring" 00:13:53.844 } 00:13:53.844 }, 00:13:53.844 { 00:13:53.844 "method": "sock_impl_set_options", 00:13:53.844 "params": { 00:13:53.844 "impl_name": "ssl", 00:13:53.844 "recv_buf_size": 4096, 00:13:53.844 "send_buf_size": 4096, 00:13:53.844 "enable_recv_pipe": true, 00:13:53.844 "enable_quickack": false, 00:13:53.844 "enable_placement_id": 0, 00:13:53.844 "enable_zerocopy_send_server": true, 00:13:53.844 "enable_zerocopy_send_client": false, 00:13:53.844 "zerocopy_threshold": 0, 00:13:53.844 "tls_version": 0, 00:13:53.844 "enable_ktls": false 00:13:53.844 } 00:13:53.844 }, 00:13:53.844 { 00:13:53.844 "method": "sock_impl_set_options", 00:13:53.844 "params": { 00:13:53.844 "impl_name": "posix", 00:13:53.844 "recv_buf_size": 2097152, 00:13:53.844 "send_buf_size": 2097152, 00:13:53.844 "enable_recv_pipe": true, 00:13:53.844 "enable_quickack": false, 00:13:53.844 "enable_placement_id": 0, 00:13:53.844 "enable_zerocopy_send_server": true, 00:13:53.844 "enable_zerocopy_send_client": false, 00:13:53.844 "zerocopy_threshold": 0, 00:13:53.844 "tls_version": 0, 00:13:53.844 "enable_ktls": false 00:13:53.844 } 00:13:53.844 }, 00:13:53.844 { 00:13:53.844 "method": "sock_impl_set_options", 00:13:53.844 "params": { 00:13:53.844 "impl_name": "uring", 00:13:53.844 "recv_buf_size": 2097152, 00:13:53.844 "send_buf_size": 2097152, 00:13:53.844 "enable_recv_pipe": true, 00:13:53.844 "enable_quickack": false, 00:13:53.844 "enable_placement_id": 0, 00:13:53.844 "enable_zerocopy_send_server": false, 00:13:53.844 "enable_zerocopy_send_client": false, 00:13:53.844 "zerocopy_threshold": 0, 00:13:53.844 "tls_version": 0, 00:13:53.844 "enable_ktls": false 00:13:53.844 } 00:13:53.844 } 00:13:53.844 ] 00:13:53.844 }, 00:13:53.844 { 00:13:53.844 "subsystem": "vmd", 00:13:53.844 "config": [] 00:13:53.844 }, 00:13:53.844 { 00:13:53.844 "subsystem": "accel", 00:13:53.844 "config": [ 00:13:53.844 { 00:13:53.844 "method": "accel_set_options", 00:13:53.844 "params": { 00:13:53.844 "small_cache_size": 128, 00:13:53.844 "large_cache_size": 16, 00:13:53.844 "task_count": 2048, 00:13:53.844 "sequence_count": 2048, 00:13:53.844 "buf_count": 2048 00:13:53.844 } 00:13:53.844 } 00:13:53.844 ] 00:13:53.844 }, 00:13:53.844 { 00:13:53.844 "subsystem": "bdev", 00:13:53.844 "config": [ 00:13:53.844 { 00:13:53.844 "method": "bdev_set_options", 00:13:53.844 "params": { 00:13:53.844 "bdev_io_pool_size": 65535, 00:13:53.844 
"bdev_io_cache_size": 256, 00:13:53.844 "bdev_auto_examine": true, 00:13:53.844 "iobuf_small_cache_size": 128, 00:13:53.844 "iobuf_large_cache_size": 16 00:13:53.844 } 00:13:53.844 }, 00:13:53.844 { 00:13:53.844 "method": "bdev_raid_set_options", 00:13:53.844 "params": { 00:13:53.844 "process_window_size_kb": 1024 00:13:53.844 } 00:13:53.844 }, 00:13:53.844 { 00:13:53.844 "method": "bdev_iscsi_set_options", 00:13:53.844 "params": { 00:13:53.844 "timeout_sec": 30 00:13:53.844 } 00:13:53.844 }, 00:13:53.844 { 00:13:53.844 "method": "bdev_nvme_set_options", 00:13:53.844 "params": { 00:13:53.844 "action_on_timeout": "none", 00:13:53.844 "timeout_us": 0, 00:13:53.844 "timeout_admin_us": 0, 00:13:53.844 "keep_alive_timeout_ms": 10000, 00:13:53.844 "arbitration_burst": 0, 00:13:53.844 "low_priority_weight": 0, 00:13:53.844 "medium_priority_weight": 0, 00:13:53.844 "high_priority_weight": 0, 00:13:53.844 "nvme_adminq_poll_period_us": 10000, 00:13:53.844 "nvme_ioq_poll_period_us": 0, 00:13:53.844 "io_queue_requests": 512, 00:13:53.844 "delay_cmd_submit": true, 00:13:53.844 "transport_retry_count": 4, 00:13:53.844 "bdev_retry_count": 3, 00:13:53.844 "transport_ack_timeout": 0, 00:13:53.844 "ctrlr_loss_timeout_sec": 0, 00:13:53.844 "reconnect_delay_sec": 0, 00:13:53.844 "fast_io_fail_timeout_sec": 0, 00:13:53.844 "disable_auto_failback": false, 00:13:53.844 "generate_uuids": false, 00:13:53.844 "transport_tos": 0, 00:13:53.844 "nvme_error_stat": false, 00:13:53.844 "rdma_srq_size": 0, 00:13:53.844 "io_path_stat": false, 00:13:53.844 "allow_accel_sequence": false, 00:13:53.844 "rdma_max_cq_size": 0, 00:13:53.844 "rdma_cm_event_timeout_ms": 0, 00:13:53.844 "dhchap_digests": [ 00:13:53.844 "sha256", 00:13:53.844 "sha384", 00:13:53.844 "sha512" 00:13:53.844 ], 00:13:53.844 "dhchap_dhgroups": [ 00:13:53.844 "null", 00:13:53.844 "ffdhe2048", 00:13:53.844 "ffdhe3072", 00:13:53.844 "ffdhe4096", 00:13:53.844 "ffdhe6144", 00:13:53.844 "ffdhe8192" 00:13:53.844 ] 00:13:53.844 } 00:13:53.844 }, 00:13:53.844 { 00:13:53.844 "method": "bdev_nvme_attach_controller", 00:13:53.844 "params": { 00:13:53.844 "name": "TLSTEST", 00:13:53.844 "trtype": "TCP", 00:13:53.844 "adrfam": "IPv4", 00:13:53.844 "traddr": "10.0.0.2", 00:13:53.844 "trsvcid": "4420", 00:13:53.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.844 "prchk_reftag": false, 00:13:53.844 "prchk_guard": false, 00:13:53.844 "ctrlr_loss_timeout_sec": 0, 00:13:53.844 "reconnect_delay_sec": 0, 00:13:53.844 "fast_io_fail_timeout_sec": 0, 00:13:53.844 "psk": "/tmp/tmp.iUSK22Zx90", 00:13:53.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.844 "hdgst": false, 00:13:53.844 "ddgst": false 00:13:53.844 } 00:13:53.844 }, 00:13:53.845 { 00:13:53.845 "method": "bdev_nvme_set_hotplug", 00:13:53.845 "params": { 00:13:53.845 "period_us": 100000, 00:13:53.845 "enable": false 00:13:53.845 } 00:13:53.845 }, 00:13:53.845 { 00:13:53.845 "method": "bdev_wait_for_examine" 00:13:53.845 } 00:13:53.845 ] 00:13:53.845 }, 00:13:53.845 { 00:13:53.845 "subsystem": "nbd", 00:13:53.845 "config": [] 00:13:53.845 } 00:13:53.845 ] 00:13:53.845 }' 00:13:53.845 [2024-07-15 09:39:48.226466] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:13:53.845 [2024-07-15 09:39:48.226569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73989 ] 00:13:54.103 [2024-07-15 09:39:48.365791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.103 [2024-07-15 09:39:48.493651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.362 [2024-07-15 09:39:48.632135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:54.362 [2024-07-15 09:39:48.673074] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:54.362 [2024-07-15 09:39:48.673598] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:54.927 09:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.927 09:39:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:54.928 09:39:49 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:54.928 Running I/O for 10 seconds... 00:14:07.124 00:14:07.124 Latency(us) 00:14:07.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.124 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:07.124 Verification LBA range: start 0x0 length 0x2000 00:14:07.124 TLSTESTn1 : 10.02 3971.83 15.51 0.00 0.00 32163.30 7626.01 25380.31 00:14:07.124 =================================================================================================================== 00:14:07.124 Total : 3971.83 15.51 0.00 0.00 32163.30 7626.01 25380.31 00:14:07.124 0 00:14:07.124 09:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:07.124 09:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73989 00:14:07.124 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73989 ']' 00:14:07.124 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73989 00:14:07.124 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:07.124 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.124 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73989 00:14:07.124 killing process with pid 73989 00:14:07.124 Received shutdown signal, test time was about 10.000000 seconds 00:14:07.124 00:14:07.124 Latency(us) 00:14:07.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.125 =================================================================================================================== 00:14:07.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73989' 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73989 00:14:07.125 [2024-07-15 09:39:59.422185] app.c:1024:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73989 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73961 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73961 ']' 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73961 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73961 00:14:07.125 killing process with pid 73961 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73961' 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73961 00:14:07.125 [2024-07-15 09:39:59.681282] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73961 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74132 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74132 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74132 ']' 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.125 09:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.125 [2024-07-15 09:39:59.987450] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:07.125 [2024-07-15 09:39:59.987737] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.125 [2024-07-15 09:40:00.128353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.125 [2024-07-15 09:40:00.248798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:07.125 [2024-07-15 09:40:00.249151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.125 [2024-07-15 09:40:00.249335] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.125 [2024-07-15 09:40:00.249547] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.125 [2024-07-15 09:40:00.249593] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.125 [2024-07-15 09:40:00.249761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.125 [2024-07-15 09:40:00.308293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:07.125 09:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.125 09:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:07.125 09:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:07.125 09:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:07.125 09:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.125 09:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.125 09:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.iUSK22Zx90 00:14:07.125 09:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.iUSK22Zx90 00:14:07.125 09:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:07.125 [2024-07-15 09:40:01.226908] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.125 09:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:07.125 09:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:07.383 [2024-07-15 09:40:01.783066] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:07.383 [2024-07-15 09:40:01.783325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.383 09:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:07.641 malloc0 00:14:07.641 09:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:07.899 09:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iUSK22Zx90 00:14:08.158 [2024-07-15 09:40:02.487866] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:08.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
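Collecting the setup_nvmf_tgt steps just traced (tls.sh@51-58) into one place, the target-side TLS wiring amounts to the following RPC sequence; note it still uses the deprecated PSK-path form flagged by the warning above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-capable (secure channel)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # PSK supplied as a file path (deprecated in favour of keyring-backed keys)
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iUSK22Zx90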
00:14:08.158 09:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=74182 00:14:08.158 09:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:08.158 09:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:08.158 09:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 74182 /var/tmp/bdevperf.sock 00:14:08.158 09:40:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74182 ']' 00:14:08.158 09:40:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:08.158 09:40:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.158 09:40:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:08.158 09:40:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.158 09:40:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.158 [2024-07-15 09:40:02.557273] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:08.158 [2024-07-15 09:40:02.557460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74182 ] 00:14:08.416 [2024-07-15 09:40:02.700293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.416 [2024-07-15 09:40:02.832552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.675 [2024-07-15 09:40:02.891823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:09.239 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:09.239 09:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:09.239 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iUSK22Zx90 00:14:09.497 09:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:09.754 [2024-07-15 09:40:03.979857] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:09.754 nvme0n1 00:14:09.754 09:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:09.754 Running I/O for 1 seconds... 
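On the initiator side this run uses the keyring flow rather than the raw PSK path: the key file is registered under a name, and the attach references that key. Condensing the commands traced above (tls.sh@227-232) against the bdevperf RPC socket:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # register the PSK file as a named key
    $rpc keyring_file_add_key key0 /tmp/tmp.iUSK22Zx90
    # attach over TCP with TLS using that key; prints the new bdev name (nvme0n1 above)
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # then drive the verify workload whose results follow
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests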
00:14:11.128 00:14:11.128 Latency(us) 00:14:11.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.128 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:11.128 Verification LBA range: start 0x0 length 0x2000 00:14:11.128 nvme0n1 : 1.02 3757.93 14.68 0.00 0.00 33653.36 7387.69 20375.74 00:14:11.128 =================================================================================================================== 00:14:11.128 Total : 3757.93 14.68 0.00 0.00 33653.36 7387.69 20375.74 00:14:11.128 0 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 74182 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74182 ']' 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74182 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74182 00:14:11.128 killing process with pid 74182 00:14:11.128 Received shutdown signal, test time was about 1.000000 seconds 00:14:11.128 00:14:11.128 Latency(us) 00:14:11.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.128 =================================================================================================================== 00:14:11.128 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74182' 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74182 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74182 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 74132 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74132 ']' 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74132 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74132 00:14:11.128 killing process with pid 74132 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74132' 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74132 00:14:11.128 [2024-07-15 09:40:05.505248] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:11.128 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74132 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74238 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74238 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74238 ']' 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.387 09:40:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.387 [2024-07-15 09:40:05.821238] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:11.387 [2024-07-15 09:40:05.821331] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.646 [2024-07-15 09:40:05.958838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.646 [2024-07-15 09:40:06.067663] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.646 [2024-07-15 09:40:06.067734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.646 [2024-07-15 09:40:06.067746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.646 [2024-07-15 09:40:06.067755] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.646 [2024-07-15 09:40:06.067762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:11.646 [2024-07-15 09:40:06.067794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.904 [2024-07-15 09:40:06.125953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:12.471 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.471 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:12.471 09:40:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:12.471 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:12.471 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.471 09:40:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.471 09:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:14:12.471 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.471 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.471 [2024-07-15 09:40:06.890307] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.471 malloc0 00:14:12.471 [2024-07-15 09:40:06.922570] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:12.471 [2024-07-15 09:40:06.922773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:12.730 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.730 09:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=74271 00:14:12.730 09:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:12.730 09:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 74271 /var/tmp/bdevperf.sock 00:14:12.730 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74271 ']' 00:14:12.730 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:12.730 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.730 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:12.730 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.730 09:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.730 [2024-07-15 09:40:07.006710] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:14:12.730 [2024-07-15 09:40:07.007007] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74271 ] 00:14:12.730 [2024-07-15 09:40:07.148452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.988 [2024-07-15 09:40:07.271994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.988 [2024-07-15 09:40:07.330484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:13.553 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.553 09:40:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:13.553 09:40:07 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iUSK22Zx90 00:14:13.810 09:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:14.070 [2024-07-15 09:40:08.485998] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:14.328 nvme0n1 00:14:14.328 09:40:08 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:14.328 Running I/O for 1 seconds... 00:14:15.697 00:14:15.697 Latency(us) 00:14:15.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.697 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:15.697 Verification LBA range: start 0x0 length 0x2000 00:14:15.697 nvme0n1 : 1.02 3660.13 14.30 0.00 0.00 34427.20 4885.41 25141.99 00:14:15.697 =================================================================================================================== 00:14:15.697 Total : 3660.13 14.30 0.00 0.00 34427.20 4885.41 25141.99 00:14:15.697 0 00:14:15.697 09:40:09 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:15.697 09:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.697 09:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.697 09:40:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.697 09:40:09 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:14:15.697 "subsystems": [ 00:14:15.697 { 00:14:15.697 "subsystem": "keyring", 00:14:15.697 "config": [ 00:14:15.697 { 00:14:15.697 "method": "keyring_file_add_key", 00:14:15.697 "params": { 00:14:15.697 "name": "key0", 00:14:15.697 "path": "/tmp/tmp.iUSK22Zx90" 00:14:15.697 } 00:14:15.697 } 00:14:15.697 ] 00:14:15.697 }, 00:14:15.697 { 00:14:15.697 "subsystem": "iobuf", 00:14:15.697 "config": [ 00:14:15.697 { 00:14:15.697 "method": "iobuf_set_options", 00:14:15.697 "params": { 00:14:15.697 "small_pool_count": 8192, 00:14:15.697 "large_pool_count": 1024, 00:14:15.697 "small_bufsize": 8192, 00:14:15.697 "large_bufsize": 135168 00:14:15.697 } 00:14:15.697 } 00:14:15.697 ] 00:14:15.697 }, 00:14:15.697 { 00:14:15.697 "subsystem": "sock", 00:14:15.697 "config": [ 00:14:15.697 { 00:14:15.697 "method": "sock_set_default_impl", 00:14:15.697 "params": { 00:14:15.697 "impl_name": "uring" 
00:14:15.697 } 00:14:15.697 }, 00:14:15.697 { 00:14:15.697 "method": "sock_impl_set_options", 00:14:15.697 "params": { 00:14:15.697 "impl_name": "ssl", 00:14:15.697 "recv_buf_size": 4096, 00:14:15.697 "send_buf_size": 4096, 00:14:15.697 "enable_recv_pipe": true, 00:14:15.697 "enable_quickack": false, 00:14:15.697 "enable_placement_id": 0, 00:14:15.697 "enable_zerocopy_send_server": true, 00:14:15.697 "enable_zerocopy_send_client": false, 00:14:15.697 "zerocopy_threshold": 0, 00:14:15.697 "tls_version": 0, 00:14:15.697 "enable_ktls": false 00:14:15.697 } 00:14:15.697 }, 00:14:15.697 { 00:14:15.697 "method": "sock_impl_set_options", 00:14:15.697 "params": { 00:14:15.698 "impl_name": "posix", 00:14:15.698 "recv_buf_size": 2097152, 00:14:15.698 "send_buf_size": 2097152, 00:14:15.698 "enable_recv_pipe": true, 00:14:15.698 "enable_quickack": false, 00:14:15.698 "enable_placement_id": 0, 00:14:15.698 "enable_zerocopy_send_server": true, 00:14:15.698 "enable_zerocopy_send_client": false, 00:14:15.698 "zerocopy_threshold": 0, 00:14:15.698 "tls_version": 0, 00:14:15.698 "enable_ktls": false 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "sock_impl_set_options", 00:14:15.698 "params": { 00:14:15.698 "impl_name": "uring", 00:14:15.698 "recv_buf_size": 2097152, 00:14:15.698 "send_buf_size": 2097152, 00:14:15.698 "enable_recv_pipe": true, 00:14:15.698 "enable_quickack": false, 00:14:15.698 "enable_placement_id": 0, 00:14:15.698 "enable_zerocopy_send_server": false, 00:14:15.698 "enable_zerocopy_send_client": false, 00:14:15.698 "zerocopy_threshold": 0, 00:14:15.698 "tls_version": 0, 00:14:15.698 "enable_ktls": false 00:14:15.698 } 00:14:15.698 } 00:14:15.698 ] 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "subsystem": "vmd", 00:14:15.698 "config": [] 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "subsystem": "accel", 00:14:15.698 "config": [ 00:14:15.698 { 00:14:15.698 "method": "accel_set_options", 00:14:15.698 "params": { 00:14:15.698 "small_cache_size": 128, 00:14:15.698 "large_cache_size": 16, 00:14:15.698 "task_count": 2048, 00:14:15.698 "sequence_count": 2048, 00:14:15.698 "buf_count": 2048 00:14:15.698 } 00:14:15.698 } 00:14:15.698 ] 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "subsystem": "bdev", 00:14:15.698 "config": [ 00:14:15.698 { 00:14:15.698 "method": "bdev_set_options", 00:14:15.698 "params": { 00:14:15.698 "bdev_io_pool_size": 65535, 00:14:15.698 "bdev_io_cache_size": 256, 00:14:15.698 "bdev_auto_examine": true, 00:14:15.698 "iobuf_small_cache_size": 128, 00:14:15.698 "iobuf_large_cache_size": 16 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "bdev_raid_set_options", 00:14:15.698 "params": { 00:14:15.698 "process_window_size_kb": 1024 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "bdev_iscsi_set_options", 00:14:15.698 "params": { 00:14:15.698 "timeout_sec": 30 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "bdev_nvme_set_options", 00:14:15.698 "params": { 00:14:15.698 "action_on_timeout": "none", 00:14:15.698 "timeout_us": 0, 00:14:15.698 "timeout_admin_us": 0, 00:14:15.698 "keep_alive_timeout_ms": 10000, 00:14:15.698 "arbitration_burst": 0, 00:14:15.698 "low_priority_weight": 0, 00:14:15.698 "medium_priority_weight": 0, 00:14:15.698 "high_priority_weight": 0, 00:14:15.698 "nvme_adminq_poll_period_us": 10000, 00:14:15.698 "nvme_ioq_poll_period_us": 0, 00:14:15.698 "io_queue_requests": 0, 00:14:15.698 "delay_cmd_submit": true, 00:14:15.698 "transport_retry_count": 4, 00:14:15.698 "bdev_retry_count": 3, 
00:14:15.698 "transport_ack_timeout": 0, 00:14:15.698 "ctrlr_loss_timeout_sec": 0, 00:14:15.698 "reconnect_delay_sec": 0, 00:14:15.698 "fast_io_fail_timeout_sec": 0, 00:14:15.698 "disable_auto_failback": false, 00:14:15.698 "generate_uuids": false, 00:14:15.698 "transport_tos": 0, 00:14:15.698 "nvme_error_stat": false, 00:14:15.698 "rdma_srq_size": 0, 00:14:15.698 "io_path_stat": false, 00:14:15.698 "allow_accel_sequence": false, 00:14:15.698 "rdma_max_cq_size": 0, 00:14:15.698 "rdma_cm_event_timeout_ms": 0, 00:14:15.698 "dhchap_digests": [ 00:14:15.698 "sha256", 00:14:15.698 "sha384", 00:14:15.698 "sha512" 00:14:15.698 ], 00:14:15.698 "dhchap_dhgroups": [ 00:14:15.698 "null", 00:14:15.698 "ffdhe2048", 00:14:15.698 "ffdhe3072", 00:14:15.698 "ffdhe4096", 00:14:15.698 "ffdhe6144", 00:14:15.698 "ffdhe8192" 00:14:15.698 ] 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "bdev_nvme_set_hotplug", 00:14:15.698 "params": { 00:14:15.698 "period_us": 100000, 00:14:15.698 "enable": false 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "bdev_malloc_create", 00:14:15.698 "params": { 00:14:15.698 "name": "malloc0", 00:14:15.698 "num_blocks": 8192, 00:14:15.698 "block_size": 4096, 00:14:15.698 "physical_block_size": 4096, 00:14:15.698 "uuid": "ef8f101d-e05f-4190-82a2-4e0457380d1e", 00:14:15.698 "optimal_io_boundary": 0 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "bdev_wait_for_examine" 00:14:15.698 } 00:14:15.698 ] 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "subsystem": "nbd", 00:14:15.698 "config": [] 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "subsystem": "scheduler", 00:14:15.698 "config": [ 00:14:15.698 { 00:14:15.698 "method": "framework_set_scheduler", 00:14:15.698 "params": { 00:14:15.698 "name": "static" 00:14:15.698 } 00:14:15.698 } 00:14:15.698 ] 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "subsystem": "nvmf", 00:14:15.698 "config": [ 00:14:15.698 { 00:14:15.698 "method": "nvmf_set_config", 00:14:15.698 "params": { 00:14:15.698 "discovery_filter": "match_any", 00:14:15.698 "admin_cmd_passthru": { 00:14:15.698 "identify_ctrlr": false 00:14:15.698 } 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "nvmf_set_max_subsystems", 00:14:15.698 "params": { 00:14:15.698 "max_subsystems": 1024 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "nvmf_set_crdt", 00:14:15.698 "params": { 00:14:15.698 "crdt1": 0, 00:14:15.698 "crdt2": 0, 00:14:15.698 "crdt3": 0 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "nvmf_create_transport", 00:14:15.698 "params": { 00:14:15.698 "trtype": "TCP", 00:14:15.698 "max_queue_depth": 128, 00:14:15.698 "max_io_qpairs_per_ctrlr": 127, 00:14:15.698 "in_capsule_data_size": 4096, 00:14:15.698 "max_io_size": 131072, 00:14:15.698 "io_unit_size": 131072, 00:14:15.698 "max_aq_depth": 128, 00:14:15.698 "num_shared_buffers": 511, 00:14:15.698 "buf_cache_size": 4294967295, 00:14:15.698 "dif_insert_or_strip": false, 00:14:15.698 "zcopy": false, 00:14:15.698 "c2h_success": false, 00:14:15.698 "sock_priority": 0, 00:14:15.698 "abort_timeout_sec": 1, 00:14:15.698 "ack_timeout": 0, 00:14:15.698 "data_wr_pool_size": 0 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "nvmf_create_subsystem", 00:14:15.698 "params": { 00:14:15.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.698 "allow_any_host": false, 00:14:15.698 "serial_number": "00000000000000000000", 00:14:15.698 "model_number": "SPDK bdev Controller", 00:14:15.698 "max_namespaces": 32, 
00:14:15.698 "min_cntlid": 1, 00:14:15.698 "max_cntlid": 65519, 00:14:15.698 "ana_reporting": false 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "nvmf_subsystem_add_host", 00:14:15.698 "params": { 00:14:15.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.698 "host": "nqn.2016-06.io.spdk:host1", 00:14:15.698 "psk": "key0" 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "nvmf_subsystem_add_ns", 00:14:15.698 "params": { 00:14:15.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.698 "namespace": { 00:14:15.698 "nsid": 1, 00:14:15.698 "bdev_name": "malloc0", 00:14:15.698 "nguid": "EF8F101DE05F419082A24E0457380D1E", 00:14:15.698 "uuid": "ef8f101d-e05f-4190-82a2-4e0457380d1e", 00:14:15.698 "no_auto_visible": false 00:14:15.698 } 00:14:15.698 } 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "method": "nvmf_subsystem_add_listener", 00:14:15.698 "params": { 00:14:15.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.698 "listen_address": { 00:14:15.698 "trtype": "TCP", 00:14:15.698 "adrfam": "IPv4", 00:14:15.698 "traddr": "10.0.0.2", 00:14:15.698 "trsvcid": "4420" 00:14:15.698 }, 00:14:15.698 "secure_channel": true 00:14:15.698 } 00:14:15.698 } 00:14:15.698 ] 00:14:15.698 } 00:14:15.698 ] 00:14:15.698 }' 00:14:15.698 09:40:09 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:15.956 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:14:15.956 "subsystems": [ 00:14:15.956 { 00:14:15.956 "subsystem": "keyring", 00:14:15.956 "config": [ 00:14:15.956 { 00:14:15.957 "method": "keyring_file_add_key", 00:14:15.957 "params": { 00:14:15.957 "name": "key0", 00:14:15.957 "path": "/tmp/tmp.iUSK22Zx90" 00:14:15.957 } 00:14:15.957 } 00:14:15.957 ] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "iobuf", 00:14:15.957 "config": [ 00:14:15.957 { 00:14:15.957 "method": "iobuf_set_options", 00:14:15.957 "params": { 00:14:15.957 "small_pool_count": 8192, 00:14:15.957 "large_pool_count": 1024, 00:14:15.957 "small_bufsize": 8192, 00:14:15.957 "large_bufsize": 135168 00:14:15.957 } 00:14:15.957 } 00:14:15.957 ] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "sock", 00:14:15.957 "config": [ 00:14:15.957 { 00:14:15.957 "method": "sock_set_default_impl", 00:14:15.957 "params": { 00:14:15.957 "impl_name": "uring" 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "sock_impl_set_options", 00:14:15.957 "params": { 00:14:15.957 "impl_name": "ssl", 00:14:15.957 "recv_buf_size": 4096, 00:14:15.957 "send_buf_size": 4096, 00:14:15.957 "enable_recv_pipe": true, 00:14:15.957 "enable_quickack": false, 00:14:15.957 "enable_placement_id": 0, 00:14:15.957 "enable_zerocopy_send_server": true, 00:14:15.957 "enable_zerocopy_send_client": false, 00:14:15.957 "zerocopy_threshold": 0, 00:14:15.957 "tls_version": 0, 00:14:15.957 "enable_ktls": false 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "sock_impl_set_options", 00:14:15.957 "params": { 00:14:15.957 "impl_name": "posix", 00:14:15.957 "recv_buf_size": 2097152, 00:14:15.957 "send_buf_size": 2097152, 00:14:15.957 "enable_recv_pipe": true, 00:14:15.957 "enable_quickack": false, 00:14:15.957 "enable_placement_id": 0, 00:14:15.957 "enable_zerocopy_send_server": true, 00:14:15.957 "enable_zerocopy_send_client": false, 00:14:15.957 "zerocopy_threshold": 0, 00:14:15.957 "tls_version": 0, 00:14:15.957 "enable_ktls": false 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": 
"sock_impl_set_options", 00:14:15.957 "params": { 00:14:15.957 "impl_name": "uring", 00:14:15.957 "recv_buf_size": 2097152, 00:14:15.957 "send_buf_size": 2097152, 00:14:15.957 "enable_recv_pipe": true, 00:14:15.957 "enable_quickack": false, 00:14:15.957 "enable_placement_id": 0, 00:14:15.957 "enable_zerocopy_send_server": false, 00:14:15.957 "enable_zerocopy_send_client": false, 00:14:15.957 "zerocopy_threshold": 0, 00:14:15.957 "tls_version": 0, 00:14:15.957 "enable_ktls": false 00:14:15.957 } 00:14:15.957 } 00:14:15.957 ] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "vmd", 00:14:15.957 "config": [] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "accel", 00:14:15.957 "config": [ 00:14:15.957 { 00:14:15.957 "method": "accel_set_options", 00:14:15.957 "params": { 00:14:15.957 "small_cache_size": 128, 00:14:15.957 "large_cache_size": 16, 00:14:15.957 "task_count": 2048, 00:14:15.957 "sequence_count": 2048, 00:14:15.957 "buf_count": 2048 00:14:15.957 } 00:14:15.957 } 00:14:15.957 ] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "bdev", 00:14:15.957 "config": [ 00:14:15.957 { 00:14:15.957 "method": "bdev_set_options", 00:14:15.957 "params": { 00:14:15.957 "bdev_io_pool_size": 65535, 00:14:15.957 "bdev_io_cache_size": 256, 00:14:15.957 "bdev_auto_examine": true, 00:14:15.957 "iobuf_small_cache_size": 128, 00:14:15.957 "iobuf_large_cache_size": 16 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_raid_set_options", 00:14:15.957 "params": { 00:14:15.957 "process_window_size_kb": 1024 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_iscsi_set_options", 00:14:15.957 "params": { 00:14:15.957 "timeout_sec": 30 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_nvme_set_options", 00:14:15.957 "params": { 00:14:15.957 "action_on_timeout": "none", 00:14:15.957 "timeout_us": 0, 00:14:15.957 "timeout_admin_us": 0, 00:14:15.957 "keep_alive_timeout_ms": 10000, 00:14:15.957 "arbitration_burst": 0, 00:14:15.957 "low_priority_weight": 0, 00:14:15.957 "medium_priority_weight": 0, 00:14:15.957 "high_priority_weight": 0, 00:14:15.957 "nvme_adminq_poll_period_us": 10000, 00:14:15.957 "nvme_ioq_poll_period_us": 0, 00:14:15.957 "io_queue_requests": 512, 00:14:15.957 "delay_cmd_submit": true, 00:14:15.957 "transport_retry_count": 4, 00:14:15.957 "bdev_retry_count": 3, 00:14:15.957 "transport_ack_timeout": 0, 00:14:15.957 "ctrlr_loss_timeout_sec": 0, 00:14:15.957 "reconnect_delay_sec": 0, 00:14:15.957 "fast_io_fail_timeout_sec": 0, 00:14:15.957 "disable_auto_failback": false, 00:14:15.957 "generate_uuids": false, 00:14:15.957 "transport_tos": 0, 00:14:15.957 "nvme_error_stat": false, 00:14:15.957 "rdma_srq_size": 0, 00:14:15.957 "io_path_stat": false, 00:14:15.957 "allow_accel_sequence": false, 00:14:15.957 "rdma_max_cq_size": 0, 00:14:15.957 "rdma_cm_event_timeout_ms": 0, 00:14:15.957 "dhchap_digests": [ 00:14:15.957 "sha256", 00:14:15.957 "sha384", 00:14:15.957 "sha512" 00:14:15.957 ], 00:14:15.957 "dhchap_dhgroups": [ 00:14:15.957 "null", 00:14:15.957 "ffdhe2048", 00:14:15.957 "ffdhe3072", 00:14:15.957 "ffdhe4096", 00:14:15.957 "ffdhe6144", 00:14:15.957 "ffdhe8192" 00:14:15.957 ] 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_nvme_attach_controller", 00:14:15.957 "params": { 00:14:15.957 "name": "nvme0", 00:14:15.957 "trtype": "TCP", 00:14:15.957 "adrfam": "IPv4", 00:14:15.957 "traddr": "10.0.0.2", 00:14:15.957 "trsvcid": "4420", 00:14:15.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:14:15.957 "prchk_reftag": false, 00:14:15.957 "prchk_guard": false, 00:14:15.957 "ctrlr_loss_timeout_sec": 0, 00:14:15.957 "reconnect_delay_sec": 0, 00:14:15.957 "fast_io_fail_timeout_sec": 0, 00:14:15.957 "psk": "key0", 00:14:15.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:15.957 "hdgst": false, 00:14:15.957 "ddgst": false 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_nvme_set_hotplug", 00:14:15.957 "params": { 00:14:15.957 "period_us": 100000, 00:14:15.957 "enable": false 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_enable_histogram", 00:14:15.957 "params": { 00:14:15.957 "name": "nvme0n1", 00:14:15.957 "enable": true 00:14:15.957 } 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "method": "bdev_wait_for_examine" 00:14:15.957 } 00:14:15.957 ] 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "subsystem": "nbd", 00:14:15.957 "config": [] 00:14:15.957 } 00:14:15.957 ] 00:14:15.957 }' 00:14:15.957 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 74271 00:14:15.957 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74271 ']' 00:14:15.957 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74271 00:14:15.957 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:15.957 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.957 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74271 00:14:15.957 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:15.957 killing process with pid 74271 00:14:15.957 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:15.957 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74271' 00:14:15.957 Received shutdown signal, test time was about 1.000000 seconds 00:14:15.957 00:14:15.957 Latency(us) 00:14:15.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.957 =================================================================================================================== 00:14:15.957 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:15.957 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74271 00:14:15.957 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74271 00:14:16.215 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 74238 00:14:16.215 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74238 ']' 00:14:16.215 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74238 00:14:16.215 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:16.215 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:16.215 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74238 00:14:16.215 killing process with pid 74238 00:14:16.215 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:16.215 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:16.216 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74238' 00:14:16.216 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74238 00:14:16.216 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74238 
00:14:16.474 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:16.474 09:40:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:14:16.474 "subsystems": [ 00:14:16.474 { 00:14:16.474 "subsystem": "keyring", 00:14:16.474 "config": [ 00:14:16.474 { 00:14:16.474 "method": "keyring_file_add_key", 00:14:16.474 "params": { 00:14:16.474 "name": "key0", 00:14:16.474 "path": "/tmp/tmp.iUSK22Zx90" 00:14:16.474 } 00:14:16.474 } 00:14:16.474 ] 00:14:16.474 }, 00:14:16.474 { 00:14:16.474 "subsystem": "iobuf", 00:14:16.474 "config": [ 00:14:16.474 { 00:14:16.474 "method": "iobuf_set_options", 00:14:16.474 "params": { 00:14:16.474 "small_pool_count": 8192, 00:14:16.474 "large_pool_count": 1024, 00:14:16.474 "small_bufsize": 8192, 00:14:16.474 "large_bufsize": 135168 00:14:16.474 } 00:14:16.474 } 00:14:16.474 ] 00:14:16.474 }, 00:14:16.474 { 00:14:16.474 "subsystem": "sock", 00:14:16.474 "config": [ 00:14:16.474 { 00:14:16.474 "method": "sock_set_default_impl", 00:14:16.474 "params": { 00:14:16.474 "impl_name": "uring" 00:14:16.474 } 00:14:16.474 }, 00:14:16.474 { 00:14:16.474 "method": "sock_impl_set_options", 00:14:16.474 "params": { 00:14:16.474 "impl_name": "ssl", 00:14:16.474 "recv_buf_size": 4096, 00:14:16.474 "send_buf_size": 4096, 00:14:16.474 "enable_recv_pipe": true, 00:14:16.474 "enable_quickack": false, 00:14:16.474 "enable_placement_id": 0, 00:14:16.474 "enable_zerocopy_send_server": true, 00:14:16.474 "enable_zerocopy_send_client": false, 00:14:16.474 "zerocopy_threshold": 0, 00:14:16.474 "tls_version": 0, 00:14:16.474 "enable_ktls": false 00:14:16.474 } 00:14:16.474 }, 00:14:16.474 { 00:14:16.474 "method": "sock_impl_set_options", 00:14:16.474 "params": { 00:14:16.474 "impl_name": "posix", 00:14:16.474 "recv_buf_size": 2097152, 00:14:16.474 "send_buf_size": 2097152, 00:14:16.474 "enable_recv_pipe": true, 00:14:16.474 "enable_quickack": false, 00:14:16.474 "enable_placement_id": 0, 00:14:16.474 "enable_zerocopy_send_server": true, 00:14:16.474 "enable_zerocopy_send_client": false, 00:14:16.474 "zerocopy_threshold": 0, 00:14:16.474 "tls_version": 0, 00:14:16.474 "enable_ktls": false 00:14:16.474 } 00:14:16.474 }, 00:14:16.474 { 00:14:16.474 "method": "sock_impl_set_options", 00:14:16.474 "params": { 00:14:16.474 "impl_name": "uring", 00:14:16.474 "recv_buf_size": 2097152, 00:14:16.474 "send_buf_size": 2097152, 00:14:16.474 "enable_recv_pipe": true, 00:14:16.474 "enable_quickack": false, 00:14:16.474 "enable_placement_id": 0, 00:14:16.474 "enable_zerocopy_send_server": false, 00:14:16.474 "enable_zerocopy_send_client": false, 00:14:16.474 "zerocopy_threshold": 0, 00:14:16.474 "tls_version": 0, 00:14:16.474 "enable_ktls": false 00:14:16.474 } 00:14:16.474 } 00:14:16.474 ] 00:14:16.474 }, 00:14:16.474 { 00:14:16.474 "subsystem": "vmd", 00:14:16.474 "config": [] 00:14:16.474 }, 00:14:16.474 { 00:14:16.474 "subsystem": "accel", 00:14:16.474 "config": [ 00:14:16.474 { 00:14:16.474 "method": "accel_set_options", 00:14:16.474 "params": { 00:14:16.474 "small_cache_size": 128, 00:14:16.474 "large_cache_size": 16, 00:14:16.474 "task_count": 2048, 00:14:16.474 "sequence_count": 2048, 00:14:16.474 "buf_count": 2048 00:14:16.474 } 00:14:16.474 } 00:14:16.474 ] 00:14:16.474 }, 00:14:16.474 { 00:14:16.474 "subsystem": "bdev", 00:14:16.474 "config": [ 00:14:16.474 { 00:14:16.474 "method": "bdev_set_options", 00:14:16.474 "params": { 00:14:16.474 "bdev_io_pool_size": 65535, 00:14:16.474 "bdev_io_cache_size": 256, 00:14:16.474 "bdev_auto_examine": true, 00:14:16.474 
"iobuf_small_cache_size": 128, 00:14:16.474 "iobuf_large_cache_size": 16 00:14:16.474 } 00:14:16.474 }, 00:14:16.474 { 00:14:16.474 "method": "bdev_raid_set_options", 00:14:16.474 "params": { 00:14:16.474 "process_window_size_kb": 1024 00:14:16.474 } 00:14:16.474 }, 00:14:16.474 { 00:14:16.474 "method": "bdev_iscsi_set_options", 00:14:16.474 "params": { 00:14:16.474 "timeout_sec": 30 00:14:16.474 } 00:14:16.474 }, 00:14:16.474 { 00:14:16.474 "method": "bdev_nvme_set_options", 00:14:16.474 "params": { 00:14:16.474 "action_on_timeout": "none", 00:14:16.474 "timeout_us": 0, 00:14:16.474 "timeout_admin_us": 0, 00:14:16.474 "keep_alive_timeout_ms": 10000, 00:14:16.475 "arbitration_burst": 0, 00:14:16.475 "low_priority_weight": 0, 00:14:16.475 "medium_priority_weight": 0, 00:14:16.475 "high_priority_weight": 0, 00:14:16.475 "nvme_adminq_poll_period_us": 10000, 00:14:16.475 "nvme_ioq_poll_period_us": 0, 00:14:16.475 "io_queue_requests": 0, 00:14:16.475 "delay_cmd_submit": true, 00:14:16.475 "transport_retry_count": 4, 00:14:16.475 "bdev_retry_count": 3, 00:14:16.475 "transport_ack_timeout": 0, 00:14:16.475 "ctrlr_loss_timeout_sec": 0, 00:14:16.475 "reconnect_delay_sec": 0, 00:14:16.475 "fast_io_fail_timeout_sec": 0, 00:14:16.475 "disable_auto_failback": false, 00:14:16.475 "generate_uuids": false, 00:14:16.475 "transport_tos": 0, 00:14:16.475 "nvme_error_stat": false, 00:14:16.475 "rdma_srq_size": 0, 00:14:16.475 "io_path_stat": false, 00:14:16.475 "allow_accel_sequence": false, 00:14:16.475 "rdma_max_cq_size": 0, 00:14:16.475 "rdma_cm_event_timeout_ms": 0, 00:14:16.475 "dhchap_digests": [ 00:14:16.475 "sha256", 00:14:16.475 "sha384", 00:14:16.475 "sha512" 00:14:16.475 ], 00:14:16.475 "dhchap_dhgroups": [ 00:14:16.475 "null", 00:14:16.475 "ffdhe2048", 00:14:16.475 "ffdhe3072", 00:14:16.475 "ffdhe4096", 00:14:16.475 "ffdhe6144", 00:14:16.475 "ffdhe8192" 00:14:16.475 ] 00:14:16.475 } 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "method": "bdev_nvme_set_hotplug", 00:14:16.475 "params": { 00:14:16.475 "period_us": 100000, 00:14:16.475 "enable": false 00:14:16.475 } 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "method": "bdev_malloc_create", 00:14:16.475 "params": { 00:14:16.475 "name": "malloc0", 00:14:16.475 "num_blocks": 8192, 00:14:16.475 "block_size": 4096, 00:14:16.475 "physical_block_size": 4096, 00:14:16.475 "uuid": "ef8f101d-e05f-4190-82a2-4e0457380d1e", 00:14:16.475 "optimal_io_boundary": 0 00:14:16.475 } 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "method": "bdev_wait_for_examine" 00:14:16.475 } 00:14:16.475 ] 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "subsystem": "nbd", 00:14:16.475 "config": [] 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "subsystem": "scheduler", 00:14:16.475 "config": [ 00:14:16.475 { 00:14:16.475 "method": "framework_set_scheduler", 00:14:16.475 "params": { 00:14:16.475 "name": "static" 00:14:16.475 } 00:14:16.475 } 00:14:16.475 ] 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "subsystem": "nvmf", 00:14:16.475 "config": [ 00:14:16.475 { 00:14:16.475 "method": "nvmf_set_config", 00:14:16.475 "params": { 00:14:16.475 "discovery_filter": "match_any", 00:14:16.475 "admin_cmd_passthru": { 00:14:16.475 "identify_ctrlr": false 00:14:16.475 } 00:14:16.475 } 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "method": "nvmf_set_max_subsystems", 00:14:16.475 "params": { 00:14:16.475 "max_subsystems": 1024 00:14:16.475 } 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "method": "nvmf_set_crdt", 00:14:16.475 "params": { 00:14:16.475 "crdt1": 0, 00:14:16.475 "crdt2": 0, 00:14:16.475 "crdt3": 0 
00:14:16.475 } 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "method": "nvmf_create_transport", 00:14:16.475 "params": { 00:14:16.475 "trtype": "TCP", 00:14:16.475 "max_queue_depth": 128, 00:14:16.475 "max_io_qpairs_per_ctrlr": 127, 00:14:16.475 "in_capsule_data_size": 4096, 00:14:16.475 "max_io_size": 131072, 00:14:16.475 "io_unit_size": 131072, 00:14:16.475 "max_aq_depth": 128, 00:14:16.475 "num_shared_buffers": 511, 00:14:16.475 "buf_cache_size": 4294967295, 00:14:16.475 "dif_insert_or_strip": false, 00:14:16.475 "zcopy": false, 00:14:16.475 "c2h_success": false, 00:14:16.475 "sock_priority": 0, 00:14:16.475 "abort_timeout_sec": 1, 00:14:16.475 "ack_timeout": 0, 00:14:16.475 "data_wr_pool_size": 0 00:14:16.475 } 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "method": "nvmf_create_subsystem", 00:14:16.475 "params": { 00:14:16.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.475 "allow_any_host": false, 00:14:16.475 "serial_number": "00000000000000000000", 00:14:16.475 "model_number": "SPDK bdev Controller", 00:14:16.475 "max_namespaces": 32, 00:14:16.475 "min_cntlid": 1, 00:14:16.475 "max_cntlid": 65519, 00:14:16.475 "ana_reporting": false 00:14:16.475 } 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "method": "nvmf_subsystem_add_host", 00:14:16.475 "params": { 00:14:16.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.475 "host": "nqn.2016-06.io.spdk:host1", 00:14:16.475 "psk": "key0" 00:14:16.475 } 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "method": "nvmf_subsystem_add_ns", 00:14:16.475 "params": { 00:14:16.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.475 "namespace": { 00:14:16.475 "nsid": 1, 00:14:16.475 "bdev_name": "malloc0", 00:14:16.475 "nguid": "EF8F101DE05F419082A24E0457380D1E", 00:14:16.475 "uuid": "ef8f101d-e05f-4190-82a2-4e0457380d1e", 00:14:16.475 "no_auto_visible": false 00:14:16.475 } 00:14:16.475 } 00:14:16.475 }, 00:14:16.475 { 00:14:16.475 "method": "nvmf_subsystem_add_listener", 00:14:16.475 "params": { 00:14:16.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.475 "listen_address": { 00:14:16.475 "trtype": "TCP", 00:14:16.475 "adrfam": "IPv4", 00:14:16.475 "traddr": "10.0.0.2", 00:14:16.475 "trsvcid": "4420" 00:14:16.475 }, 00:14:16.475 "secure_channel": true 00:14:16.475 } 00:14:16.475 } 00:14:16.475 ] 00:14:16.475 } 00:14:16.475 ] 00:14:16.475 }' 00:14:16.475 09:40:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.475 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.475 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.475 09:40:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74333 00:14:16.475 09:40:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74333 00:14:16.475 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74333 ']' 00:14:16.475 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.475 09:40:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:16.475 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.475 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
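Stripped of the sock, iobuf, accel, bdev (the malloc0 backing namespace) and scheduler defaults, the TLS-relevant portion of the JSON echoed above reduces to roughly this trimmed excerpt; the host entry referencing key0 from the keyring plus secure_channel on the listener are what enable TLS for the subsystem:

    {
      "subsystems": [
        {
          "subsystem": "keyring",
          "config": [
            { "method": "keyring_file_add_key",
              "params": { "name": "key0", "path": "/tmp/tmp.iUSK22Zx90" } }
          ]
        },
        {
          "subsystem": "nvmf",
          "config": [
            { "method": "nvmf_create_transport",
              "params": { "trtype": "TCP" } },
            { "method": "nvmf_create_subsystem",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
            { "method": "nvmf_subsystem_add_host",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                          "host": "nqn.2016-06.io.spdk:host1",
                          "psk": "key0" } },
            { "method": "nvmf_subsystem_add_listener",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                          "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                              "traddr": "10.0.0.2", "trsvcid": "4420" },
                          "secure_channel": true } }
          ]
        }
      ]
    }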
00:14:16.475 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.475 09:40:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.475 [2024-07-15 09:40:10.821289] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:16.475 [2024-07-15 09:40:10.821426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.734 [2024-07-15 09:40:10.963711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.734 [2024-07-15 09:40:11.084783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.734 [2024-07-15 09:40:11.084849] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.734 [2024-07-15 09:40:11.084877] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.734 [2024-07-15 09:40:11.084885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.734 [2024-07-15 09:40:11.084893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.734 [2024-07-15 09:40:11.085002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.992 [2024-07-15 09:40:11.255626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:16.992 [2024-07-15 09:40:11.338359] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.992 [2024-07-15 09:40:11.370248] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:16.992 [2024-07-15 09:40:11.370500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=74365 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 74365 /var/tmp/bdevperf.sock 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74365 ']' 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
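The "-c /dev/fd/62" seen in the nvmf_tgt command line above is how the harness feeds that inline JSON to the target: the config is echoed into a process substitution and the app reads it from the resulting fd. A minimal sketch of the same pattern (the $target_config variable name is illustrative, not from the script):

    # Start the target inside its network namespace with an inline JSON config;
    # <(...) expands to /dev/fd/<n> in bash, which is what the trace shows as /dev/fd/62.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        -c <(echo "$target_config")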
00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.559 09:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:14:17.559 "subsystems": [ 00:14:17.559 { 00:14:17.559 "subsystem": "keyring", 00:14:17.559 "config": [ 00:14:17.559 { 00:14:17.559 "method": "keyring_file_add_key", 00:14:17.559 "params": { 00:14:17.559 "name": "key0", 00:14:17.559 "path": "/tmp/tmp.iUSK22Zx90" 00:14:17.559 } 00:14:17.559 } 00:14:17.559 ] 00:14:17.559 }, 00:14:17.559 { 00:14:17.559 "subsystem": "iobuf", 00:14:17.559 "config": [ 00:14:17.559 { 00:14:17.559 "method": "iobuf_set_options", 00:14:17.559 "params": { 00:14:17.559 "small_pool_count": 8192, 00:14:17.559 "large_pool_count": 1024, 00:14:17.559 "small_bufsize": 8192, 00:14:17.559 "large_bufsize": 135168 00:14:17.559 } 00:14:17.559 } 00:14:17.559 ] 00:14:17.559 }, 00:14:17.559 { 00:14:17.559 "subsystem": "sock", 00:14:17.559 "config": [ 00:14:17.559 { 00:14:17.559 "method": "sock_set_default_impl", 00:14:17.559 "params": { 00:14:17.559 "impl_name": "uring" 00:14:17.559 } 00:14:17.559 }, 00:14:17.559 { 00:14:17.559 "method": "sock_impl_set_options", 00:14:17.559 "params": { 00:14:17.559 "impl_name": "ssl", 00:14:17.559 "recv_buf_size": 4096, 00:14:17.559 "send_buf_size": 4096, 00:14:17.559 "enable_recv_pipe": true, 00:14:17.559 "enable_quickack": false, 00:14:17.559 "enable_placement_id": 0, 00:14:17.559 "enable_zerocopy_send_server": true, 00:14:17.559 "enable_zerocopy_send_client": false, 00:14:17.559 "zerocopy_threshold": 0, 00:14:17.559 "tls_version": 0, 00:14:17.559 "enable_ktls": false 00:14:17.559 } 00:14:17.559 }, 00:14:17.559 { 00:14:17.559 "method": "sock_impl_set_options", 00:14:17.559 "params": { 00:14:17.559 "impl_name": "posix", 00:14:17.559 "recv_buf_size": 2097152, 00:14:17.559 "send_buf_size": 2097152, 00:14:17.559 "enable_recv_pipe": true, 00:14:17.559 "enable_quickack": false, 00:14:17.559 "enable_placement_id": 0, 00:14:17.559 "enable_zerocopy_send_server": true, 00:14:17.559 "enable_zerocopy_send_client": false, 00:14:17.559 "zerocopy_threshold": 0, 00:14:17.559 "tls_version": 0, 00:14:17.559 "enable_ktls": false 00:14:17.559 } 00:14:17.559 }, 00:14:17.559 { 00:14:17.559 "method": "sock_impl_set_options", 00:14:17.559 "params": { 00:14:17.559 "impl_name": "uring", 00:14:17.559 "recv_buf_size": 2097152, 00:14:17.559 "send_buf_size": 2097152, 00:14:17.559 "enable_recv_pipe": true, 00:14:17.559 "enable_quickack": false, 00:14:17.559 "enable_placement_id": 0, 00:14:17.559 "enable_zerocopy_send_server": false, 00:14:17.559 "enable_zerocopy_send_client": false, 00:14:17.559 "zerocopy_threshold": 0, 00:14:17.559 "tls_version": 0, 00:14:17.559 "enable_ktls": false 00:14:17.559 } 00:14:17.559 } 00:14:17.559 ] 00:14:17.559 }, 00:14:17.559 { 00:14:17.559 "subsystem": "vmd", 00:14:17.559 "config": [] 00:14:17.559 }, 00:14:17.559 { 00:14:17.559 "subsystem": "accel", 00:14:17.559 "config": [ 00:14:17.559 { 00:14:17.559 "method": "accel_set_options", 00:14:17.559 "params": { 00:14:17.559 "small_cache_size": 128, 00:14:17.559 "large_cache_size": 16, 00:14:17.559 "task_count": 2048, 00:14:17.559 "sequence_count": 2048, 00:14:17.559 "buf_count": 2048 00:14:17.559 } 00:14:17.559 } 00:14:17.559 ] 00:14:17.559 }, 00:14:17.559 { 
00:14:17.559 "subsystem": "bdev", 00:14:17.559 "config": [ 00:14:17.559 { 00:14:17.559 "method": "bdev_set_options", 00:14:17.559 "params": { 00:14:17.559 "bdev_io_pool_size": 65535, 00:14:17.559 "bdev_io_cache_size": 256, 00:14:17.559 "bdev_auto_examine": true, 00:14:17.559 "iobuf_small_cache_size": 128, 00:14:17.560 "iobuf_large_cache_size": 16 00:14:17.560 } 00:14:17.560 }, 00:14:17.560 { 00:14:17.560 "method": "bdev_raid_set_options", 00:14:17.560 "params": { 00:14:17.560 "process_window_size_kb": 1024 00:14:17.560 } 00:14:17.560 }, 00:14:17.560 { 00:14:17.560 "method": "bdev_iscsi_set_options", 00:14:17.560 "params": { 00:14:17.560 "timeout_sec": 30 00:14:17.560 } 00:14:17.560 }, 00:14:17.560 { 00:14:17.560 "method": "bdev_nvme_set_options", 00:14:17.560 "params": { 00:14:17.560 "action_on_timeout": "none", 00:14:17.560 "timeout_us": 0, 00:14:17.560 "timeout_admin_us": 0, 00:14:17.560 "keep_alive_timeout_ms": 10000, 00:14:17.560 "arbitration_burst": 0, 00:14:17.560 "low_priority_weight": 0, 00:14:17.560 "medium_priority_weight": 0, 00:14:17.560 "high_priority_weight": 0, 00:14:17.560 "nvme_adminq_poll_period_us": 10000, 00:14:17.560 "nvme_ioq_poll_period_us": 0, 00:14:17.560 "io_queue_requests": 512, 00:14:17.560 "delay_cmd_submit": true, 00:14:17.560 "transport_retry_count": 4, 00:14:17.560 "bdev_retry_count": 3, 00:14:17.560 "transport_ack_timeout": 0, 00:14:17.560 "ctrlr_loss_timeout_sec": 0, 00:14:17.560 "reconnect_delay_sec": 0, 00:14:17.560 "fast_io_fail_timeout_sec": 0, 00:14:17.560 "disable_auto_failback": false, 00:14:17.560 "generate_uuids": false, 00:14:17.560 "transport_tos": 0, 00:14:17.560 "nvme_error_stat": false, 00:14:17.560 "rdma_srq_size": 0, 00:14:17.560 "io_path_stat": false, 00:14:17.560 "allow_accel_sequence": false, 00:14:17.560 "rdma_max_cq_size": 0, 00:14:17.560 "rdma_cm_event_timeout_ms": 0, 00:14:17.560 "dhchap_digests": [ 00:14:17.560 "sha256", 00:14:17.560 "sha384", 00:14:17.560 "sha512" 00:14:17.560 ], 00:14:17.560 "dhchap_dhgroups": [ 00:14:17.560 "null", 00:14:17.560 "ffdhe2048", 00:14:17.560 "ffdhe3072", 00:14:17.560 "ffdhe4096", 00:14:17.560 "ffdhe6144", 00:14:17.560 "ffdhe8192" 00:14:17.560 ] 00:14:17.560 } 00:14:17.560 }, 00:14:17.560 { 00:14:17.560 "method": "bdev_nvme_attach_controller", 00:14:17.560 "params": { 00:14:17.560 "name": "nvme0", 00:14:17.560 "trtype": "TCP", 00:14:17.560 "adrfam": "IPv4", 00:14:17.560 "traddr": "10.0.0.2", 00:14:17.560 "trsvcid": "4420", 00:14:17.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.560 "prchk_reftag": false, 00:14:17.560 "prchk_guard": false, 00:14:17.560 "ctrlr_loss_timeout_sec": 0, 00:14:17.560 "reconnect_delay_sec": 0, 00:14:17.560 "fast_io_fail_timeout_sec": 0, 00:14:17.560 "psk": "key0", 00:14:17.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:17.560 "hdgst": false, 00:14:17.560 "ddgst": false 00:14:17.560 } 00:14:17.560 }, 00:14:17.560 { 00:14:17.560 "method": "bdev_nvme_set_hotplug", 00:14:17.560 "params": { 00:14:17.560 "period_us": 100000, 00:14:17.560 "enable": false 00:14:17.560 } 00:14:17.560 }, 00:14:17.560 { 00:14:17.560 "method": "bdev_enable_histogram", 00:14:17.560 "params": { 00:14:17.560 "name": "nvme0n1", 00:14:17.560 "enable": true 00:14:17.560 } 00:14:17.560 }, 00:14:17.560 { 00:14:17.560 "method": "bdev_wait_for_examine" 00:14:17.560 } 00:14:17.560 ] 00:14:17.560 }, 00:14:17.560 { 00:14:17.560 "subsystem": "nbd", 00:14:17.560 "config": [] 00:14:17.560 } 00:14:17.560 ] 00:14:17.560 }' 00:14:17.560 [2024-07-15 09:40:11.874238] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 
24.03.0 initialization... 00:14:17.560 [2024-07-15 09:40:11.874353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74365 ] 00:14:17.560 [2024-07-15 09:40:12.006152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.818 [2024-07-15 09:40:12.122875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.818 [2024-07-15 09:40:12.260839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:18.077 [2024-07-15 09:40:12.308426] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:18.643 09:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.643 09:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:18.643 09:40:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:18.643 09:40:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:18.922 09:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.922 09:40:13 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:18.922 Running I/O for 1 seconds... 00:14:19.857 00:14:19.857 Latency(us) 00:14:19.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.857 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:19.857 Verification LBA range: start 0x0 length 0x2000 00:14:19.857 nvme0n1 : 1.02 3866.93 15.11 0.00 0.00 32719.59 7626.01 33363.78 00:14:19.857 =================================================================================================================== 00:14:19.857 Total : 3866.93 15.11 0.00 0.00 32719.59 7626.01 33363.78 00:14:19.857 0 00:14:19.857 09:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:19.857 09:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:14:19.857 09:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:19.857 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:20.116 nvmf_trace.0 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74365 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74365 ']' 00:14:20.116 09:40:14 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74365 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74365 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:20.116 killing process with pid 74365 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74365' 00:14:20.116 Received shutdown signal, test time was about 1.000000 seconds 00:14:20.116 00:14:20.116 Latency(us) 00:14:20.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.116 =================================================================================================================== 00:14:20.116 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74365 00:14:20.116 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74365 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.374 rmmod nvme_tcp 00:14:20.374 rmmod nvme_fabrics 00:14:20.374 rmmod nvme_keyring 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74333 ']' 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74333 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74333 ']' 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74333 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74333 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:20.374 killing process with pid 74333 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74333' 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74333 00:14:20.374 09:40:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74333 00:14:20.632 09:40:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.632 09:40:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
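For reference, the client side of the run that just completed reduces to roughly this sequence (binary paths, socket and workload flags exactly as printed in the trace; the config on /dev/fd/63 is supplied by the harness via process substitution):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    sock=/var/tmp/bdevperf.sock

    # 1. Start bdevperf idle (-z) so the TLS-backed NVMe bdev can be set up over RPC
    #    before the verify workload begins.
    $bdevperf -m 2 -z -r $sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 &

    # 2. Confirm the controller created from the JSON config (bdev_nvme_attach_controller
    #    with psk key0) actually came up.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock bdev_nvme_get_controllers

    # 3. Trigger the configured workload; this is the step that produced the IOPS/latency
    #    table above.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests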
00:14:20.632 09:40:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.632 09:40:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.632 09:40:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.632 09:40:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.632 09:40:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.632 09:40:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.632 09:40:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:20.632 09:40:15 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.7nYAX2CnCW /tmp/tmp.cZG6vP4sA3 /tmp/tmp.iUSK22Zx90 00:14:20.890 00:14:20.891 real 1m27.254s 00:14:20.891 user 2m19.085s 00:14:20.891 sys 0m27.716s 00:14:20.891 09:40:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:20.891 09:40:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.891 ************************************ 00:14:20.891 END TEST nvmf_tls 00:14:20.891 ************************************ 00:14:20.891 09:40:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:20.891 09:40:15 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:20.891 09:40:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:20.891 09:40:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.891 09:40:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:20.891 ************************************ 00:14:20.891 START TEST nvmf_fips 00:14:20.891 ************************************ 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:20.891 * Looking for test storage... 
00:14:20.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:20.891 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:20.892 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:20.892 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:20.892 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:20.892 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:20.892 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:14:21.150 Error setting digest 00:14:21.150 0012BED2327F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:21.150 0012BED2327F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:21.150 Cannot find device "nvmf_tgt_br" 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:21.150 Cannot find device "nvmf_tgt_br2" 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:21.150 Cannot find device "nvmf_tgt_br" 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:21.150 Cannot find device "nvmf_tgt_br2" 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:21.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:21.150 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:21.150 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:21.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:14:21.408 00:14:21.408 --- 10.0.0.2 ping statistics --- 00:14:21.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.408 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:21.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:21.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:21.408 00:14:21.408 --- 10.0.0.3 ping statistics --- 00:14:21.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.408 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:21.408 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:21.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:21.409 00:14:21.409 --- 10.0.0.1 ping statistics --- 00:14:21.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.409 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74635 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74635 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74635 ']' 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.409 09:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:21.667 [2024-07-15 09:40:15.926495] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:14:21.667 [2024-07-15 09:40:15.926596] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.667 [2024-07-15 09:40:16.063775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.927 [2024-07-15 09:40:16.202742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.927 [2024-07-15 09:40:16.202797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.927 [2024-07-15 09:40:16.202821] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.927 [2024-07-15 09:40:16.202832] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.927 [2024-07-15 09:40:16.202841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.927 [2024-07-15 09:40:16.202877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.927 [2024-07-15 09:40:16.262100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:22.494 09:40:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.495 09:40:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:22.495 09:40:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.495 09:40:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:22.495 09:40:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:22.753 09:40:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.753 09:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:22.753 09:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:22.753 09:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:22.753 09:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:22.753 09:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:22.753 09:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:22.753 09:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:22.753 09:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:23.011 [2024-07-15 09:40:17.278792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.011 [2024-07-15 09:40:17.294718] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:23.011 [2024-07-15 09:40:17.294986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.011 [2024-07-15 09:40:17.326825] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:23.011 malloc0 00:14:23.011 09:40:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
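The fips.sh trace above provisions the TLS PSK before the bdevperf initiator is configured. A minimal sketch of those steps, with the key value and path copied verbatim from the trace (the rpc.py calls that setup_nvmf_tgt_conf makes afterwards are not expanded in this log and are only summarized in the last comment):

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"    # interchange-format PSK, written without a trailing newline
chmod 0600 "$key_path"          # keep the key private to the test user
# setup_nvmf_tgt_conf "$key_path" then builds the TLS-enabled subsystem over scripts/rpc.py (assumed; not expanded in the trace)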
00:14:23.011 09:40:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74670 00:14:23.011 09:40:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74670 /var/tmp/bdevperf.sock 00:14:23.011 09:40:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:23.011 09:40:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74670 ']' 00:14:23.011 09:40:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.011 09:40:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.011 09:40:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.011 09:40:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.011 09:40:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:23.011 [2024-07-15 09:40:17.443417] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:23.011 [2024-07-15 09:40:17.443516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74670 ] 00:14:23.270 [2024-07-15 09:40:17.581382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.270 [2024-07-15 09:40:17.705119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.529 [2024-07-15 09:40:17.762587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:24.096 09:40:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:24.096 09:40:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:24.096 09:40:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:24.355 [2024-07-15 09:40:18.610509] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:24.355 [2024-07-15 09:40:18.611330] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:24.355 TLSTESTn1 00:14:24.355 09:40:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:24.355 Running I/O for 10 seconds... 
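A condensed sketch of the initiator-side flow traced above: bdevperf is started idle (-z) on its own RPC socket, a TLS-protected controller is attached using the PSK file, and the workload is then kicked off over that socket. All arguments are copied from the trace; only the backgrounding of bdevperf is added here for illustration:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # runs the 10 s verify workload whose results follow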
00:14:36.564 00:14:36.564 Latency(us) 00:14:36.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.564 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:36.564 Verification LBA range: start 0x0 length 0x2000 00:14:36.564 TLSTESTn1 : 10.02 3933.69 15.37 0.00 0.00 32473.95 8221.79 35031.97 00:14:36.564 =================================================================================================================== 00:14:36.564 Total : 3933.69 15.37 0.00 0.00 32473.95 8221.79 35031.97 00:14:36.564 0 00:14:36.564 09:40:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:36.564 09:40:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:36.564 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:14:36.564 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:14:36.564 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:36.564 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:36.565 nvmf_trace.0 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74670 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74670 ']' 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74670 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74670 00:14:36.565 killing process with pid 74670 00:14:36.565 Received shutdown signal, test time was about 10.000000 seconds 00:14:36.565 00:14:36.565 Latency(us) 00:14:36.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.565 =================================================================================================================== 00:14:36.565 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74670' 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74670 00:14:36.565 [2024-07-15 09:40:28.997495] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:36.565 09:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74670 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:36.565 rmmod nvme_tcp 00:14:36.565 rmmod nvme_fabrics 00:14:36.565 rmmod nvme_keyring 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74635 ']' 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74635 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74635 ']' 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74635 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74635 00:14:36.565 killing process with pid 74635 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74635' 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74635 00:14:36.565 [2024-07-15 09:40:29.341645] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74635 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:36.565 00:14:36.565 real 0m14.571s 00:14:36.565 user 0m19.845s 00:14:36.565 sys 0m5.809s 00:14:36.565 ************************************ 00:14:36.565 END TEST nvmf_fips 00:14:36.565 ************************************ 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:36.565 09:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:36.565 09:40:29 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:36.565 09:40:29 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:14:36.565 09:40:29 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:14:36.565 09:40:29 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:36.565 09:40:29 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:36.565 09:40:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:36.565 09:40:29 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:36.565 09:40:29 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:36.565 09:40:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:36.565 09:40:29 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:36.565 09:40:29 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:36.565 09:40:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:36.565 09:40:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:36.565 09:40:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:36.565 ************************************ 00:14:36.565 START TEST nvmf_identify 00:14:36.565 ************************************ 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:36.565 * Looking for test storage... 00:14:36.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:36.565 09:40:29 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:36.565 09:40:29 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:36.566 09:40:29 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:36.566 Cannot find device "nvmf_tgt_br" 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:36.566 Cannot find device "nvmf_tgt_br2" 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:14:36.566 09:40:29 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:36.566 Cannot find device "nvmf_tgt_br" 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:14:36.566 09:40:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:36.566 Cannot find device "nvmf_tgt_br2" 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:36.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:36.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:36.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:14:36.566 00:14:36.566 --- 10.0.0.2 ping statistics --- 00:14:36.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.566 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:36.566 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:36.566 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:14:36.566 00:14:36.566 --- 10.0.0.3 ping statistics --- 00:14:36.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.566 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:36.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:14:36.566 00:14:36.566 --- 10.0.0.1 ping statistics --- 00:14:36.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.566 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:36.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
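nvmf_veth_init, traced once for each test in this log, builds the same topology every time: a network namespace for the target, two veth pairs, and a bridge joining the host-side peers. A sketch of the essential commands, with names and addresses taken from the trace (the second target interface nvmf_tgt_if2/10.0.0.3 and the individual link-up steps are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk                               # the target runs inside its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge ties the host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                          # connectivity check before the target is started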
00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=75024 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 75024 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 75024 ']' 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:36.566 09:40:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:36.566 [2024-07-15 09:40:30.412427] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:36.566 [2024-07-15 09:40:30.412724] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.566 [2024-07-15 09:40:30.552557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.566 [2024-07-15 09:40:30.710325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.566 [2024-07-15 09:40:30.710974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.566 [2024-07-15 09:40:30.711193] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.566 [2024-07-15 09:40:30.711293] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.566 [2024-07-15 09:40:30.711358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:36.567 [2024-07-15 09:40:30.711585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.567 [2024-07-15 09:40:30.712044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.567 [2024-07-15 09:40:30.712271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.567 [2024-07-15 09:40:30.712327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.567 [2024-07-15 09:40:30.791197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:37.132 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.132 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:14:37.132 09:40:31 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:37.133 [2024-07-15 09:40:31.413269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:37.133 Malloc0 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:37.133 [2024-07-15 09:40:31.524265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:37.133 [ 00:14:37.133 { 00:14:37.133 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:37.133 "subtype": "Discovery", 00:14:37.133 "listen_addresses": [ 00:14:37.133 { 00:14:37.133 "trtype": "TCP", 00:14:37.133 "adrfam": "IPv4", 00:14:37.133 "traddr": "10.0.0.2", 00:14:37.133 "trsvcid": "4420" 00:14:37.133 } 00:14:37.133 ], 00:14:37.133 "allow_any_host": true, 00:14:37.133 "hosts": [] 00:14:37.133 }, 00:14:37.133 { 00:14:37.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.133 "subtype": "NVMe", 00:14:37.133 "listen_addresses": [ 00:14:37.133 { 00:14:37.133 "trtype": "TCP", 00:14:37.133 "adrfam": "IPv4", 00:14:37.133 "traddr": "10.0.0.2", 00:14:37.133 "trsvcid": "4420" 00:14:37.133 } 00:14:37.133 ], 00:14:37.133 "allow_any_host": true, 00:14:37.133 "hosts": [], 00:14:37.133 "serial_number": "SPDK00000000000001", 00:14:37.133 "model_number": "SPDK bdev Controller", 00:14:37.133 "max_namespaces": 32, 00:14:37.133 "min_cntlid": 1, 00:14:37.133 "max_cntlid": 65519, 00:14:37.133 "namespaces": [ 00:14:37.133 { 00:14:37.133 "nsid": 1, 00:14:37.133 "bdev_name": "Malloc0", 00:14:37.133 "name": "Malloc0", 00:14:37.133 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:37.133 "eui64": "ABCDEF0123456789", 00:14:37.133 "uuid": "bcba1db1-a34d-4dd8-a0b9-d388e753c7d0" 00:14:37.133 } 00:14:37.133 ] 00:14:37.133 } 00:14:37.133 ] 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.133 09:40:31 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:37.133 [2024-07-15 09:40:31.582641] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
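The identify host test above assembles the target configuration entirely through rpc_cmd (the harness wrapper around scripts/rpc.py) and then points the identify example at the discovery service. A sketch of the equivalent commands, all taken from the trace, with rpc_cmd shown as scripts/rpc.py:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# query the discovery subsystem with the identify example, exactly as in the trace:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all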
00:14:37.133 [2024-07-15 09:40:31.582739] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75059 ] 00:14:37.394 [2024-07-15 09:40:31.734641] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:37.394 [2024-07-15 09:40:31.734754] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:37.394 [2024-07-15 09:40:31.734763] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:37.394 [2024-07-15 09:40:31.734783] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:37.394 [2024-07-15 09:40:31.734794] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:37.394 [2024-07-15 09:40:31.738964] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:37.394 [2024-07-15 09:40:31.739049] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9922c0 0 00:14:37.394 [2024-07-15 09:40:31.745931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:37.394 [2024-07-15 09:40:31.745962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:37.394 [2024-07-15 09:40:31.745970] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:37.394 [2024-07-15 09:40:31.745974] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:37.394 [2024-07-15 09:40:31.746038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.746046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.746051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9922c0) 00:14:37.395 [2024-07-15 09:40:31.746070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:37.395 [2024-07-15 09:40:31.746105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3940, cid 0, qid 0 00:14:37.395 [2024-07-15 09:40:31.753920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.395 [2024-07-15 09:40:31.753947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.395 [2024-07-15 09:40:31.753953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.753959] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3940) on tqpair=0x9922c0 00:14:37.395 [2024-07-15 09:40:31.753973] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:37.395 [2024-07-15 09:40:31.753985] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:37.395 [2024-07-15 09:40:31.753993] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:37.395 [2024-07-15 09:40:31.754016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.395 
[2024-07-15 09:40:31.754026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9922c0) 00:14:37.395 [2024-07-15 09:40:31.754038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.395 [2024-07-15 09:40:31.754068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3940, cid 0, qid 0 00:14:37.395 [2024-07-15 09:40:31.754164] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.395 [2024-07-15 09:40:31.754171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.395 [2024-07-15 09:40:31.754175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3940) on tqpair=0x9922c0 00:14:37.395 [2024-07-15 09:40:31.754186] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:37.395 [2024-07-15 09:40:31.754194] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:37.395 [2024-07-15 09:40:31.754203] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754207] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9922c0) 00:14:37.395 [2024-07-15 09:40:31.754219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.395 [2024-07-15 09:40:31.754239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3940, cid 0, qid 0 00:14:37.395 [2024-07-15 09:40:31.754305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.395 [2024-07-15 09:40:31.754312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.395 [2024-07-15 09:40:31.754316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3940) on tqpair=0x9922c0 00:14:37.395 [2024-07-15 09:40:31.754327] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:37.395 [2024-07-15 09:40:31.754337] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:37.395 [2024-07-15 09:40:31.754345] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9922c0) 00:14:37.395 [2024-07-15 09:40:31.754361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.395 [2024-07-15 09:40:31.754379] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3940, cid 0, qid 0 00:14:37.395 [2024-07-15 09:40:31.754441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.395 [2024-07-15 09:40:31.754448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:14:37.395 [2024-07-15 09:40:31.754452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3940) on tqpair=0x9922c0 00:14:37.395 [2024-07-15 09:40:31.754463] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:37.395 [2024-07-15 09:40:31.754473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9922c0) 00:14:37.395 [2024-07-15 09:40:31.754490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.395 [2024-07-15 09:40:31.754507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3940, cid 0, qid 0 00:14:37.395 [2024-07-15 09:40:31.754575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.395 [2024-07-15 09:40:31.754582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.395 [2024-07-15 09:40:31.754586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3940) on tqpair=0x9922c0 00:14:37.395 [2024-07-15 09:40:31.754596] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:37.395 [2024-07-15 09:40:31.754602] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:37.395 [2024-07-15 09:40:31.754610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:37.395 [2024-07-15 09:40:31.754717] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:37.395 [2024-07-15 09:40:31.754722] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:37.395 [2024-07-15 09:40:31.754733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9922c0) 00:14:37.395 [2024-07-15 09:40:31.754750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.395 [2024-07-15 09:40:31.754769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3940, cid 0, qid 0 00:14:37.395 [2024-07-15 09:40:31.754837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.395 [2024-07-15 09:40:31.754844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.395 [2024-07-15 09:40:31.754848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754852] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3940) on tqpair=0x9922c0 00:14:37.395 [2024-07-15 09:40:31.754858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:37.395 [2024-07-15 09:40:31.754869] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.754878] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9922c0) 00:14:37.395 [2024-07-15 09:40:31.754885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.395 [2024-07-15 09:40:31.754918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3940, cid 0, qid 0 00:14:37.395 [2024-07-15 09:40:31.754991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.395 [2024-07-15 09:40:31.754998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.395 [2024-07-15 09:40:31.755002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.755007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3940) on tqpair=0x9922c0 00:14:37.395 [2024-07-15 09:40:31.755012] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:37.395 [2024-07-15 09:40:31.755018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:37.395 [2024-07-15 09:40:31.755027] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:37.395 [2024-07-15 09:40:31.755038] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:37.395 [2024-07-15 09:40:31.755051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.755056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9922c0) 00:14:37.395 [2024-07-15 09:40:31.755064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.395 [2024-07-15 09:40:31.755083] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3940, cid 0, qid 0 00:14:37.395 [2024-07-15 09:40:31.755201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.395 [2024-07-15 09:40:31.755209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.395 [2024-07-15 09:40:31.755213] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.755218] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9922c0): datao=0, datal=4096, cccid=0 00:14:37.395 [2024-07-15 09:40:31.755224] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d3940) on tqpair(0x9922c0): expected_datao=0, payload_size=4096 00:14:37.395 [2024-07-15 09:40:31.755229] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.755239] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.755245] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.755255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.395 [2024-07-15 09:40:31.755261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.395 [2024-07-15 09:40:31.755265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.755269] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3940) on tqpair=0x9922c0 00:14:37.395 [2024-07-15 09:40:31.755291] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:37.395 [2024-07-15 09:40:31.755297] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:37.395 [2024-07-15 09:40:31.755302] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:37.395 [2024-07-15 09:40:31.755308] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:37.395 [2024-07-15 09:40:31.755314] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:37.395 [2024-07-15 09:40:31.755319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:37.395 [2024-07-15 09:40:31.755329] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:37.395 [2024-07-15 09:40:31.755337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.755342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.395 [2024-07-15 09:40:31.755346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9922c0) 00:14:37.396 [2024-07-15 09:40:31.755354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:37.396 [2024-07-15 09:40:31.755374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3940, cid 0, qid 0 00:14:37.396 [2024-07-15 09:40:31.755460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.396 [2024-07-15 09:40:31.755468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.396 [2024-07-15 09:40:31.755472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3940) on tqpair=0x9922c0 00:14:37.396 [2024-07-15 09:40:31.755486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9922c0) 00:14:37.396 [2024-07-15 09:40:31.755501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.396 [2024-07-15 09:40:31.755509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
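The nvme_ctrlr_identify_done entries just above report transport max_xfer_size 4294967295, MDTS max_xfer_size 131072 and CNTLID 0x0001 for the discovery controller. The short sketch below (not part of the test scripts) shows how an application could read the same values back through the public SPDK API and how the 131072 figure follows from MDTS and CAP.MPSMIN (a 4096-byte minimum page shifted left by MDTS, i.e. MDTS = 5 here); it assumes a ctrlr handle already attached with spdk_nvme_connect() to this target, and the helper name print_ctrlr_limits is invented for the example.

/* Hedged sketch: reading back the limits that nvme_ctrlr_identify_done
 * logs above, via the public SPDK API.  Assumes `ctrlr` was attached to
 * the same target; only the numbers printed in the log are taken as given. */
#include <stdio.h>
#include <inttypes.h>
#include "spdk/nvme.h"

void print_ctrlr_limits(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

	/* Minimum memory page size is 2^(12 + CAP.MPSMIN) bytes (4096 here). */
	uint64_t min_page = 1ull << (12 + cap.bits.mpsmin);

	/* MDTS is in units of the minimum page size; MDTS = 5 -> 131072 bytes. */
	uint64_t mdts_limit = cdata->mdts ? (min_page << cdata->mdts) : UINT64_MAX;

	printf("CNTLID 0x%04x\n", cdata->cntlid);
	printf("MDTS-limited max transfer: %" PRIu64 " bytes\n", mdts_limit);
}

Since the TCP transport itself reports an effectively uncapped 4294967295-byte limit, the 128 KiB MDTS value is what ends up bounding a single transfer in this run.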
00:14:37.396 [2024-07-15 09:40:31.755513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9922c0) 00:14:37.396 [2024-07-15 09:40:31.755523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.396 [2024-07-15 09:40:31.755531] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9922c0) 00:14:37.396 [2024-07-15 09:40:31.755545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.396 [2024-07-15 09:40:31.755552] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755556] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9922c0) 00:14:37.396 [2024-07-15 09:40:31.755566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.396 [2024-07-15 09:40:31.755571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:37.396 [2024-07-15 09:40:31.755586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:37.396 [2024-07-15 09:40:31.755594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9922c0) 00:14:37.396 [2024-07-15 09:40:31.755606] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.396 [2024-07-15 09:40:31.755627] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3940, cid 0, qid 0 00:14:37.396 [2024-07-15 09:40:31.755634] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3ac0, cid 1, qid 0 00:14:37.396 [2024-07-15 09:40:31.755639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3c40, cid 2, qid 0 00:14:37.396 [2024-07-15 09:40:31.755644] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3dc0, cid 3, qid 0 00:14:37.396 [2024-07-15 09:40:31.755649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3f40, cid 4, qid 0 00:14:37.396 [2024-07-15 09:40:31.755773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.396 [2024-07-15 09:40:31.755780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.396 [2024-07-15 09:40:31.755784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755789] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3f40) on tqpair=0x9922c0 00:14:37.396 [2024-07-15 09:40:31.755795] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:37.396 [2024-07-15 09:40:31.755805] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:37.396 [2024-07-15 09:40:31.755818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9922c0) 00:14:37.396 [2024-07-15 09:40:31.755831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.396 [2024-07-15 09:40:31.755849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3f40, cid 4, qid 0 00:14:37.396 [2024-07-15 09:40:31.755941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.396 [2024-07-15 09:40:31.755950] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.396 [2024-07-15 09:40:31.755955] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755959] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9922c0): datao=0, datal=4096, cccid=4 00:14:37.396 [2024-07-15 09:40:31.755963] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d3f40) on tqpair(0x9922c0): expected_datao=0, payload_size=4096 00:14:37.396 [2024-07-15 09:40:31.755968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755976] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755980] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.755989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.396 [2024-07-15 09:40:31.755996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.396 [2024-07-15 09:40:31.755999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3f40) on tqpair=0x9922c0 00:14:37.396 [2024-07-15 09:40:31.756020] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:37.396 [2024-07-15 09:40:31.756059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9922c0) 00:14:37.396 [2024-07-15 09:40:31.756074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.396 [2024-07-15 09:40:31.756083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9922c0) 00:14:37.396 [2024-07-15 09:40:31.756098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.396 [2024-07-15 09:40:31.756125] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3f40, cid 4, qid 0 00:14:37.396 [2024-07-15 09:40:31.756133] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d40c0, cid 5, qid 0 00:14:37.396 [2024-07-15 09:40:31.756289] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.396 [2024-07-15 09:40:31.756296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.396 [2024-07-15 09:40:31.756300] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756304] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9922c0): datao=0, datal=1024, cccid=4 00:14:37.396 [2024-07-15 09:40:31.756309] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d3f40) on tqpair(0x9922c0): expected_datao=0, payload_size=1024 00:14:37.396 [2024-07-15 09:40:31.756314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756321] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756325] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.396 [2024-07-15 09:40:31.756338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.396 [2024-07-15 09:40:31.756341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d40c0) on tqpair=0x9922c0 00:14:37.396 [2024-07-15 09:40:31.756364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.396 [2024-07-15 09:40:31.756372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.396 [2024-07-15 09:40:31.756376] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3f40) on tqpair=0x9922c0 00:14:37.396 [2024-07-15 09:40:31.756402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9922c0) 00:14:37.396 [2024-07-15 09:40:31.756415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.396 [2024-07-15 09:40:31.756440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3f40, cid 4, qid 0 00:14:37.396 [2024-07-15 09:40:31.756531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.396 [2024-07-15 09:40:31.756538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.396 [2024-07-15 09:40:31.756542] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756546] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9922c0): datao=0, datal=3072, cccid=4 00:14:37.396 [2024-07-15 09:40:31.756551] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d3f40) on tqpair(0x9922c0): expected_datao=0, payload_size=3072 00:14:37.396 [2024-07-15 09:40:31.756555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756563] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756567] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 
09:40:31.756576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.396 [2024-07-15 09:40:31.756582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.396 [2024-07-15 09:40:31.756586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3f40) on tqpair=0x9922c0 00:14:37.396 [2024-07-15 09:40:31.756602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.396 [2024-07-15 09:40:31.756607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9922c0) 00:14:37.396 [2024-07-15 09:40:31.756614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.396 [2024-07-15 09:40:31.756638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3f40, cid 4, qid 0 00:14:37.396 ===================================================== 00:14:37.396 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:37.396 ===================================================== 00:14:37.396 Controller Capabilities/Features 00:14:37.396 ================================ 00:14:37.396 Vendor ID: 0000 00:14:37.396 Subsystem Vendor ID: 0000 00:14:37.396 Serial Number: .................... 00:14:37.396 Model Number: ........................................ 00:14:37.396 Firmware Version: 24.09 00:14:37.396 Recommended Arb Burst: 0 00:14:37.396 IEEE OUI Identifier: 00 00 00 00:14:37.396 Multi-path I/O 00:14:37.396 May have multiple subsystem ports: No 00:14:37.396 May have multiple controllers: No 00:14:37.396 Associated with SR-IOV VF: No 00:14:37.396 Max Data Transfer Size: 131072 00:14:37.396 Max Number of Namespaces: 0 00:14:37.396 Max Number of I/O Queues: 1024 00:14:37.397 NVMe Specification Version (VS): 1.3 00:14:37.397 NVMe Specification Version (Identify): 1.3 00:14:37.397 Maximum Queue Entries: 128 00:14:37.397 Contiguous Queues Required: Yes 00:14:37.397 Arbitration Mechanisms Supported 00:14:37.397 Weighted Round Robin: Not Supported 00:14:37.397 Vendor Specific: Not Supported 00:14:37.397 Reset Timeout: 15000 ms 00:14:37.397 Doorbell Stride: 4 bytes 00:14:37.397 NVM Subsystem Reset: Not Supported 00:14:37.397 Command Sets Supported 00:14:37.397 NVM Command Set: Supported 00:14:37.397 Boot Partition: Not Supported 00:14:37.397 Memory Page Size Minimum: 4096 bytes 00:14:37.397 Memory Page Size Maximum: 4096 bytes 00:14:37.397 Persistent Memory Region: Not Supported 00:14:37.397 Optional Asynchronous Events Supported 00:14:37.397 Namespace Attribute Notices: Not Supported 00:14:37.397 Firmware Activation Notices: Not Supported 00:14:37.397 ANA Change Notices: Not Supported 00:14:37.397 PLE Aggregate Log Change Notices: Not Supported 00:14:37.397 LBA Status Info Alert Notices: Not Supported 00:14:37.397 EGE Aggregate Log Change Notices: Not Supported 00:14:37.397 Normal NVM Subsystem Shutdown event: Not Supported 00:14:37.397 Zone Descriptor Change Notices: Not Supported 00:14:37.397 Discovery Log Change Notices: Supported 00:14:37.397 Controller Attributes 00:14:37.397 128-bit Host Identifier: Not Supported 00:14:37.397 Non-Operational Permissive Mode: Not Supported 00:14:37.397 NVM Sets: Not Supported 00:14:37.397 Read Recovery Levels: Not Supported 00:14:37.397 Endurance Groups: Not Supported 00:14:37.397 Predictable Latency Mode: Not 
Supported 00:14:37.397 Traffic Based Keep ALive: Not Supported 00:14:37.397 Namespace Granularity: Not Supported 00:14:37.397 SQ Associations: Not Supported 00:14:37.397 UUID List: Not Supported 00:14:37.397 Multi-Domain Subsystem: Not Supported 00:14:37.397 Fixed Capacity Management: Not Supported 00:14:37.397 Variable Capacity Management: Not Supported 00:14:37.397 Delete Endurance Group: Not Supported 00:14:37.397 Delete NVM Set: Not Supported 00:14:37.397 Extended LBA Formats Supported: Not Supported 00:14:37.397 Flexible Data Placement Supported: Not Supported 00:14:37.397 00:14:37.397 Controller Memory Buffer Support 00:14:37.397 ================================ 00:14:37.397 Supported: No 00:14:37.397 00:14:37.397 Persistent Memory Region Support 00:14:37.397 ================================ 00:14:37.397 Supported: No 00:14:37.397 00:14:37.397 Admin Command Set Attributes 00:14:37.397 ============================ 00:14:37.397 Security Send/Receive: Not Supported 00:14:37.397 Format NVM: Not Supported 00:14:37.397 Firmware Activate/Download: Not Supported 00:14:37.397 Namespace Management: Not Supported 00:14:37.397 Device Self-Test: Not Supported 00:14:37.397 Directives: Not Supported 00:14:37.397 NVMe-MI: Not Supported 00:14:37.397 Virtualization Management: Not Supported 00:14:37.397 Doorbell Buffer Config: Not Supported 00:14:37.397 Get LBA Status Capability: Not Supported 00:14:37.397 Command & Feature Lockdown Capability: Not Supported 00:14:37.397 Abort Command Limit: 1 00:14:37.397 Async Event Request Limit: 4 00:14:37.397 Number of Firmware Slots: N/A 00:14:37.397 Firmware Slot 1 Read-Only: N/A 00:14:37.397 Firm[2024-07-15 09:40:31.756724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.397 [2024-07-15 09:40:31.756731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.397 [2024-07-15 09:40:31.756735] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.397 [2024-07-15 09:40:31.756739] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9922c0): datao=0, datal=8, cccid=4 00:14:37.397 [2024-07-15 09:40:31.756744] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9d3f40) on tqpair(0x9922c0): expected_datao=0, payload_size=8 00:14:37.397 [2024-07-15 09:40:31.756748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.397 [2024-07-15 09:40:31.756755] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.397 [2024-07-15 09:40:31.756760] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.397 [2024-07-15 09:40:31.756775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.397 [2024-07-15 09:40:31.756783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.397 [2024-07-15 09:40:31.756787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.397 [2024-07-15 09:40:31.756791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3f40) on tqpair=0x9922c0 00:14:37.397 ware Activation Without Reset: N/A 00:14:37.397 Multiple Update Detection Support: N/A 00:14:37.397 Firmware Update Granularity: No Information Provided 00:14:37.397 Per-Namespace SMART Log: No 00:14:37.397 Asymmetric Namespace Access Log Page: Not Supported 00:14:37.397 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:37.397 Command Effects Log Page: Not Supported 00:14:37.397 Get Log Page Extended Data: Supported 00:14:37.397 Telemetry Log Pages: Not Supported 
00:14:37.397 Persistent Event Log Pages: Not Supported 00:14:37.397 Supported Log Pages Log Page: May Support 00:14:37.397 Commands Supported & Effects Log Page: Not Supported 00:14:37.397 Feature Identifiers & Effects Log Page:May Support 00:14:37.397 NVMe-MI Commands & Effects Log Page: May Support 00:14:37.397 Data Area 4 for Telemetry Log: Not Supported 00:14:37.397 Error Log Page Entries Supported: 128 00:14:37.397 Keep Alive: Not Supported 00:14:37.397 00:14:37.397 NVM Command Set Attributes 00:14:37.397 ========================== 00:14:37.397 Submission Queue Entry Size 00:14:37.397 Max: 1 00:14:37.397 Min: 1 00:14:37.397 Completion Queue Entry Size 00:14:37.397 Max: 1 00:14:37.397 Min: 1 00:14:37.397 Number of Namespaces: 0 00:14:37.397 Compare Command: Not Supported 00:14:37.397 Write Uncorrectable Command: Not Supported 00:14:37.397 Dataset Management Command: Not Supported 00:14:37.397 Write Zeroes Command: Not Supported 00:14:37.397 Set Features Save Field: Not Supported 00:14:37.397 Reservations: Not Supported 00:14:37.397 Timestamp: Not Supported 00:14:37.397 Copy: Not Supported 00:14:37.397 Volatile Write Cache: Not Present 00:14:37.397 Atomic Write Unit (Normal): 1 00:14:37.397 Atomic Write Unit (PFail): 1 00:14:37.397 Atomic Compare & Write Unit: 1 00:14:37.397 Fused Compare & Write: Supported 00:14:37.397 Scatter-Gather List 00:14:37.397 SGL Command Set: Supported 00:14:37.397 SGL Keyed: Supported 00:14:37.397 SGL Bit Bucket Descriptor: Not Supported 00:14:37.397 SGL Metadata Pointer: Not Supported 00:14:37.397 Oversized SGL: Not Supported 00:14:37.397 SGL Metadata Address: Not Supported 00:14:37.397 SGL Offset: Supported 00:14:37.397 Transport SGL Data Block: Not Supported 00:14:37.397 Replay Protected Memory Block: Not Supported 00:14:37.397 00:14:37.397 Firmware Slot Information 00:14:37.397 ========================= 00:14:37.397 Active slot: 0 00:14:37.397 00:14:37.397 00:14:37.397 Error Log 00:14:37.397 ========= 00:14:37.397 00:14:37.397 Active Namespaces 00:14:37.397 ================= 00:14:37.397 Discovery Log Page 00:14:37.397 ================== 00:14:37.397 Generation Counter: 2 00:14:37.397 Number of Records: 2 00:14:37.397 Record Format: 0 00:14:37.397 00:14:37.397 Discovery Log Entry 0 00:14:37.397 ---------------------- 00:14:37.397 Transport Type: 3 (TCP) 00:14:37.397 Address Family: 1 (IPv4) 00:14:37.397 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:37.397 Entry Flags: 00:14:37.397 Duplicate Returned Information: 1 00:14:37.397 Explicit Persistent Connection Support for Discovery: 1 00:14:37.397 Transport Requirements: 00:14:37.397 Secure Channel: Not Required 00:14:37.397 Port ID: 0 (0x0000) 00:14:37.397 Controller ID: 65535 (0xffff) 00:14:37.397 Admin Max SQ Size: 128 00:14:37.397 Transport Service Identifier: 4420 00:14:37.397 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:37.397 Transport Address: 10.0.0.2 00:14:37.397 Discovery Log Entry 1 00:14:37.397 ---------------------- 00:14:37.397 Transport Type: 3 (TCP) 00:14:37.397 Address Family: 1 (IPv4) 00:14:37.397 Subsystem Type: 2 (NVM Subsystem) 00:14:37.397 Entry Flags: 00:14:37.397 Duplicate Returned Information: 0 00:14:37.397 Explicit Persistent Connection Support for Discovery: 0 00:14:37.397 Transport Requirements: 00:14:37.397 Secure Channel: Not Required 00:14:37.397 Port ID: 0 (0x0000) 00:14:37.397 Controller ID: 65535 (0xffff) 00:14:37.397 Admin Max SQ Size: 128 00:14:37.397 Transport Service Identifier: 4420 00:14:37.397 NVM Subsystem Qualified Name: 
nqn.2016-06.io.spdk:cnode1 00:14:37.397 Transport Address: 10.0.0.2 [2024-07-15 09:40:31.756923] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:37.397 [2024-07-15 09:40:31.756940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3940) on tqpair=0x9922c0 00:14:37.397 [2024-07-15 09:40:31.756949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.397 [2024-07-15 09:40:31.756955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3ac0) on tqpair=0x9922c0 00:14:37.397 [2024-07-15 09:40:31.756960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.397 [2024-07-15 09:40:31.756966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3c40) on tqpair=0x9922c0 00:14:37.397 [2024-07-15 09:40:31.756971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.397 [2024-07-15 09:40:31.756976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3dc0) on tqpair=0x9922c0 00:14:37.397 [2024-07-15 09:40:31.756981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.398 [2024-07-15 09:40:31.756992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.756997] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9922c0) 00:14:37.398 [2024-07-15 09:40:31.757009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.398 [2024-07-15 09:40:31.757036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3dc0, cid 3, qid 0 00:14:37.398 [2024-07-15 09:40:31.757108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.398 [2024-07-15 09:40:31.757115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.398 [2024-07-15 09:40:31.757119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3dc0) on tqpair=0x9922c0 00:14:37.398 [2024-07-15 09:40:31.757145] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757150] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9922c0) 00:14:37.398 [2024-07-15 09:40:31.757162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.398 [2024-07-15 09:40:31.757186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3dc0, cid 3, qid 0 00:14:37.398 [2024-07-15 09:40:31.757287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.398 [2024-07-15 09:40:31.757294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.398 [2024-07-15 09:40:31.757298] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.398 [2024-07-15 
09:40:31.757303] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3dc0) on tqpair=0x9922c0 00:14:37.398 [2024-07-15 09:40:31.757309] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:37.398 [2024-07-15 09:40:31.757315] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:37.398 [2024-07-15 09:40:31.757326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9922c0) 00:14:37.398 [2024-07-15 09:40:31.757342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.398 [2024-07-15 09:40:31.757360] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3dc0, cid 3, qid 0 00:14:37.398 [2024-07-15 09:40:31.757430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.398 [2024-07-15 09:40:31.757437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.398 [2024-07-15 09:40:31.757441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757445] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3dc0) on tqpair=0x9922c0 00:14:37.398 [2024-07-15 09:40:31.757457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9922c0) 00:14:37.398 [2024-07-15 09:40:31.757473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.398 [2024-07-15 09:40:31.757491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3dc0, cid 3, qid 0 00:14:37.398 [2024-07-15 09:40:31.757546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.398 [2024-07-15 09:40:31.757553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.398 [2024-07-15 09:40:31.757557] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3dc0) on tqpair=0x9922c0 00:14:37.398 [2024-07-15 09:40:31.757572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9922c0) 00:14:37.398 [2024-07-15 09:40:31.757588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.398 [2024-07-15 09:40:31.757606] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3dc0, cid 3, qid 0 00:14:37.398 [2024-07-15 09:40:31.757675] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.398 [2024-07-15 09:40:31.757682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.398 [2024-07-15 
09:40:31.757686] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3dc0) on tqpair=0x9922c0 00:14:37.398 [2024-07-15 09:40:31.757700] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9922c0) 00:14:37.398 [2024-07-15 09:40:31.757717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.398 [2024-07-15 09:40:31.757734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3dc0, cid 3, qid 0 00:14:37.398 [2024-07-15 09:40:31.757806] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.398 [2024-07-15 09:40:31.757813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.398 [2024-07-15 09:40:31.757817] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757821] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3dc0) on tqpair=0x9922c0 00:14:37.398 [2024-07-15 09:40:31.757832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.757841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9922c0) 00:14:37.398 [2024-07-15 09:40:31.757848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.398 [2024-07-15 09:40:31.757865] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3dc0, cid 3, qid 0 00:14:37.398 [2024-07-15 09:40:31.761914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.398 [2024-07-15 09:40:31.761934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.398 [2024-07-15 09:40:31.761939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.761944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3dc0) on tqpair=0x9922c0 00:14:37.398 [2024-07-15 09:40:31.761958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.761964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.761968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9922c0) 00:14:37.398 [2024-07-15 09:40:31.761977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.398 [2024-07-15 09:40:31.762002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9d3dc0, cid 3, qid 0 00:14:37.398 [2024-07-15 09:40:31.762077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.398 [2024-07-15 09:40:31.762084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.398 [2024-07-15 09:40:31.762088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.398 [2024-07-15 09:40:31.762092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9d3dc0) on tqpair=0x9922c0 
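The stretch above, from "RTD3E = 0 us" and "shutdown timeout = 10000 ms" through the repeated FABRIC PROPERTY GET qid:0 cid:3 entries, is the host-side shutdown handshake for the discovery controller: CC.SHN is written via a Fabrics Property Set and CSTS.SHST is then polled until the controller reports completion (4 milliseconds later, per the next log line). Below is a self-contained illustration of that handshake; the register offsets and bit fields come from the NVMe base specification, while prop_get()/prop_set() and the simulated register file are invented so the sketch compiles and runs, and are not SPDK code.

/* Illustration only: the CC.SHN / CSTS.SHST shutdown handshake performed by
 * the property SET/GET capsules traced above.  Offsets and bit positions are
 * from the NVMe base spec; the fake register file is for demonstration. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NVME_REG_CC    0x14u
#define NVME_REG_CSTS  0x1cu
#define CC_SHN_NORMAL  (0x1u << 14)   /* CC.SHN = 01b: normal shutdown     */
#define CSTS_SHST_MASK (0x3u << 2)    /* CSTS.SHST, bits 3:2               */
#define CSTS_SHST_DONE (0x2u << 2)    /* 10b: shutdown processing complete */

static uint32_t regs[0x20];           /* stand-in for the remote controller */

static uint32_t prop_get(uint32_t off) { return regs[off / 4]; }

static void prop_set(uint32_t off, uint32_t val)
{
	regs[off / 4] = val;
	if (off == NVME_REG_CC && (val & CC_SHN_NORMAL)) {
		/* Pretend the controller finishes shutdown immediately. */
		regs[NVME_REG_CSTS / 4] |= CSTS_SHST_DONE;
	}
}

static bool shutdown_ctrlr(unsigned int timeout_ms)
{
	prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | CC_SHN_NORMAL);

	for (unsigned int ms = 0; ms < timeout_ms; ms++) {
		if ((prop_get(NVME_REG_CSTS) & CSTS_SHST_MASK) == CSTS_SHST_DONE) {
			return true;  /* "shutdown complete in N milliseconds" */
		}
		/* a real host would sleep ~1 ms between polls */
	}
	return false;
}

int main(void)
{
	printf("shutdown %s\n", shutdown_ctrlr(10000) ? "complete" : "timed out");
	return 0;
}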
00:14:37.398 [2024-07-15 09:40:31.762101] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:14:37.398 00:14:37.398 09:40:31 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:37.398 [2024-07-15 09:40:31.809556] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:37.398 [2024-07-15 09:40:31.809643] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75062 ] 00:14:37.660 [2024-07-15 09:40:31.961553] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:37.660 [2024-07-15 09:40:31.961642] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:37.660 [2024-07-15 09:40:31.961650] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:37.660 [2024-07-15 09:40:31.961667] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:37.660 [2024-07-15 09:40:31.961677] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:37.660 [2024-07-15 09:40:31.961869] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:37.660 [2024-07-15 09:40:31.961948] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11092c0 0 00:14:37.660 [2024-07-15 09:40:31.968918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:37.660 [2024-07-15 09:40:31.968943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:37.660 [2024-07-15 09:40:31.968951] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:37.660 [2024-07-15 09:40:31.968955] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:37.660 [2024-07-15 09:40:31.969014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.660 [2024-07-15 09:40:31.969022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.660 [2024-07-15 09:40:31.969026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11092c0) 00:14:37.660 [2024-07-15 09:40:31.969044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:37.660 [2024-07-15 09:40:31.969078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114a940, cid 0, qid 0 00:14:37.660 [2024-07-15 09:40:31.976916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.660 [2024-07-15 09:40:31.976937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.660 [2024-07-15 09:40:31.976942] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.660 [2024-07-15 09:40:31.976948] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114a940) on tqpair=0x11092c0 00:14:37.660 [2024-07-15 09:40:31.976964] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:37.660 [2024-07-15 09:40:31.976973] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:37.660 [2024-07-15 09:40:31.976981] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:37.660 [2024-07-15 09:40:31.977001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.660 [2024-07-15 09:40:31.977006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.660 [2024-07-15 09:40:31.977010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11092c0) 00:14:37.660 [2024-07-15 09:40:31.977020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.660 [2024-07-15 09:40:31.977048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114a940, cid 0, qid 0 00:14:37.660 [2024-07-15 09:40:31.977149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.660 [2024-07-15 09:40:31.977157] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.660 [2024-07-15 09:40:31.977161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.660 [2024-07-15 09:40:31.977165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114a940) on tqpair=0x11092c0 00:14:37.660 [2024-07-15 09:40:31.977172] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:37.660 [2024-07-15 09:40:31.977180] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:37.660 [2024-07-15 09:40:31.977189] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.660 [2024-07-15 09:40:31.977193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.660 [2024-07-15 09:40:31.977197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11092c0) 00:14:37.660 [2024-07-15 09:40:31.977205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.660 [2024-07-15 09:40:31.977226] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114a940, cid 0, qid 0 00:14:37.660 [2024-07-15 09:40:31.977279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.660 [2024-07-15 09:40:31.977286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.660 [2024-07-15 09:40:31.977290] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.660 [2024-07-15 09:40:31.977295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114a940) on tqpair=0x11092c0 00:14:37.660 [2024-07-15 09:40:31.977301] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:37.660 [2024-07-15 09:40:31.977310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:37.660 [2024-07-15 09:40:31.977318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.660 [2024-07-15 09:40:31.977322] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.660 [2024-07-15 09:40:31.977326] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11092c0) 00:14:37.660 
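From "setting state to connect adminq" onwards, this second run walks the same init state machine for nqn.2016-06.io.spdk:cnode1: icreq/icresp, FABRIC CONNECT, then the VS and CAP property reads being logged here. In application code the whole sequence sits behind one call; a minimal sketch follows, assuming the SPDK headers and libraries built in this workspace are available, with the transport ID string taken verbatim from the spdk_nvme_identify invocation above and the application name "identify_sketch" invented for the example.

/* Minimal sketch (not the test code): attaching to nqn.2016-06.io.spdk:cnode1
 * the way spdk_nvme_identify does.  spdk_nvme_connect() drives every
 * "setting state to ..." step traced in this log, from connect adminq
 * through identify and AER setup, and returns once the state is "ready". */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";   /* hypothetical application name */
	if (spdk_env_init(&opts) != 0) {
		return 1;
	}

	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);   /* blocks until ready */
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	printf("attached to %s\n", trid.subnqn);
	spdk_nvme_detach(ctrlr);
	return 0;
}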
[2024-07-15 09:40:31.977333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.660 [2024-07-15 09:40:31.977352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114a940, cid 0, qid 0 00:14:37.661 [2024-07-15 09:40:31.977414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.661 [2024-07-15 09:40:31.977420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.661 [2024-07-15 09:40:31.977424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.977429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114a940) on tqpair=0x11092c0 00:14:37.661 [2024-07-15 09:40:31.977435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:37.661 [2024-07-15 09:40:31.977445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.977450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.977454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.977462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.661 [2024-07-15 09:40:31.977480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114a940, cid 0, qid 0 00:14:37.661 [2024-07-15 09:40:31.977532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.661 [2024-07-15 09:40:31.977539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.661 [2024-07-15 09:40:31.977543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.977548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114a940) on tqpair=0x11092c0 00:14:37.661 [2024-07-15 09:40:31.977553] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:37.661 [2024-07-15 09:40:31.977559] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:37.661 [2024-07-15 09:40:31.977567] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:37.661 [2024-07-15 09:40:31.977674] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:37.661 [2024-07-15 09:40:31.977687] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:37.661 [2024-07-15 09:40:31.977699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.977704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.977708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.977715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.661 [2024-07-15 09:40:31.977736] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114a940, cid 0, qid 0 00:14:37.661 [2024-07-15 09:40:31.977794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.661 [2024-07-15 09:40:31.977801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.661 [2024-07-15 09:40:31.977805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.977809] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114a940) on tqpair=0x11092c0 00:14:37.661 [2024-07-15 09:40:31.977815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:37.661 [2024-07-15 09:40:31.977826] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.977831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.977835] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.977843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.661 [2024-07-15 09:40:31.977861] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114a940, cid 0, qid 0 00:14:37.661 [2024-07-15 09:40:31.977927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.661 [2024-07-15 09:40:31.977935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.661 [2024-07-15 09:40:31.977939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.977944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114a940) on tqpair=0x11092c0 00:14:37.661 [2024-07-15 09:40:31.977949] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:37.661 [2024-07-15 09:40:31.977955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.977963] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:37.661 [2024-07-15 09:40:31.977976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.977988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.977994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.978002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.661 [2024-07-15 09:40:31.978024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114a940, cid 0, qid 0 00:14:37.661 [2024-07-15 09:40:31.978143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.661 [2024-07-15 09:40:31.978150] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.661 [2024-07-15 09:40:31.978154] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978159] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0x11092c0): datao=0, datal=4096, cccid=0 00:14:37.661 [2024-07-15 09:40:31.978164] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114a940) on tqpair(0x11092c0): expected_datao=0, payload_size=4096 00:14:37.661 [2024-07-15 09:40:31.978169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978179] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978184] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.661 [2024-07-15 09:40:31.978200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.661 [2024-07-15 09:40:31.978204] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114a940) on tqpair=0x11092c0 00:14:37.661 [2024-07-15 09:40:31.978218] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:37.661 [2024-07-15 09:40:31.978223] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:37.661 [2024-07-15 09:40:31.978228] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:37.661 [2024-07-15 09:40:31.978234] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:37.661 [2024-07-15 09:40:31.978239] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:37.661 [2024-07-15 09:40:31.978244] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.978254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.978262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978271] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.978279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:37.661 [2024-07-15 09:40:31.978299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114a940, cid 0, qid 0 00:14:37.661 [2024-07-15 09:40:31.978363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.661 [2024-07-15 09:40:31.978370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.661 [2024-07-15 09:40:31.978374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114a940) on tqpair=0x11092c0 00:14:37.661 [2024-07-15 09:40:31.978388] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.978403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.661 [2024-07-15 09:40:31.978410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978414] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.978424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.661 [2024-07-15 09:40:31.978430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.978444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.661 [2024-07-15 09:40:31.978451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978459] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.978465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.661 [2024-07-15 09:40:31.978470] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.978485] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.978493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.978505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.661 [2024-07-15 09:40:31.978526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114a940, cid 0, qid 0 00:14:37.661 [2024-07-15 09:40:31.978533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114aac0, cid 1, qid 0 00:14:37.661 [2024-07-15 09:40:31.978538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114ac40, cid 2, qid 0 00:14:37.661 [2024-07-15 09:40:31.978543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.661 [2024-07-15 09:40:31.978548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114af40, cid 4, qid 0 00:14:37.661 [2024-07-15 09:40:31.978645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.661 [2024-07-15 09:40:31.978652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.661 [2024-07-15 09:40:31.978656] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114af40) on tqpair=0x11092c0 00:14:37.661 [2024-07-15 09:40:31.978666] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:37.661 [2024-07-15 09:40:31.978676] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.978686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.978694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.978701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.978717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:37.661 [2024-07-15 09:40:31.978736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114af40, cid 4, qid 0 00:14:37.661 [2024-07-15 09:40:31.978804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.661 [2024-07-15 09:40:31.978811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.661 [2024-07-15 09:40:31.978815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114af40) on tqpair=0x11092c0 00:14:37.661 [2024-07-15 09:40:31.978883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.978907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.978918] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.978923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.978931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.661 [2024-07-15 09:40:31.978953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114af40, cid 4, qid 0 00:14:37.661 [2024-07-15 09:40:31.979034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.661 [2024-07-15 09:40:31.979041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.661 [2024-07-15 09:40:31.979045] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.979049] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11092c0): datao=0, datal=4096, cccid=4 00:14:37.661 [2024-07-15 09:40:31.979053] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114af40) on tqpair(0x11092c0): expected_datao=0, payload_size=4096 00:14:37.661 [2024-07-15 09:40:31.979058] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.979066] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.979071] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.979080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.661 [2024-07-15 09:40:31.979086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.661 [2024-07-15 09:40:31.979090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.979094] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114af40) on tqpair=0x11092c0 00:14:37.661 [2024-07-15 09:40:31.979112] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:37.661 [2024-07-15 09:40:31.979125] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.979137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:37.661 [2024-07-15 09:40:31.979145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.979150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11092c0) 00:14:37.661 [2024-07-15 09:40:31.979158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.661 [2024-07-15 09:40:31.979178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114af40, cid 4, qid 0 00:14:37.661 [2024-07-15 09:40:31.979258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.661 [2024-07-15 09:40:31.979265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.661 [2024-07-15 09:40:31.979269] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.979273] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11092c0): datao=0, datal=4096, cccid=4 00:14:37.661 [2024-07-15 09:40:31.979278] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114af40) on tqpair(0x11092c0): expected_datao=0, payload_size=4096 00:14:37.661 [2024-07-15 09:40:31.979283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.661 [2024-07-15 09:40:31.979290] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979294] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.662 [2024-07-15 09:40:31.979309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.662 [2024-07-15 09:40:31.979313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114af40) on tqpair=0x11092c0 00:14:37.662 [2024-07-15 09:40:31.979346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
namespace id descriptors (timeout 30000 ms) 00:14:37.662 [2024-07-15 09:40:31.979358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:37.662 [2024-07-15 09:40:31.979367] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11092c0) 00:14:37.662 [2024-07-15 09:40:31.979380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.662 [2024-07-15 09:40:31.979401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114af40, cid 4, qid 0 00:14:37.662 [2024-07-15 09:40:31.979468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.662 [2024-07-15 09:40:31.979475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.662 [2024-07-15 09:40:31.979479] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979483] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11092c0): datao=0, datal=4096, cccid=4 00:14:37.662 [2024-07-15 09:40:31.979487] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114af40) on tqpair(0x11092c0): expected_datao=0, payload_size=4096 00:14:37.662 [2024-07-15 09:40:31.979492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979499] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979504] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.662 [2024-07-15 09:40:31.979519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.662 [2024-07-15 09:40:31.979523] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979527] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114af40) on tqpair=0x11092c0 00:14:37.662 [2024-07-15 09:40:31.979536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:37.662 [2024-07-15 09:40:31.979546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:37.662 [2024-07-15 09:40:31.979558] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:37.662 [2024-07-15 09:40:31.979566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:37.662 [2024-07-15 09:40:31.979571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:37.662 [2024-07-15 09:40:31.979577] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:37.662 [2024-07-15 09:40:31.979583] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:37.662 [2024-07-15 
09:40:31.979588] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:37.662 [2024-07-15 09:40:31.979594] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:37.662 [2024-07-15 09:40:31.979615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11092c0) 00:14:37.662 [2024-07-15 09:40:31.979627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.662 [2024-07-15 09:40:31.979635] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11092c0) 00:14:37.662 [2024-07-15 09:40:31.979649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:37.662 [2024-07-15 09:40:31.979676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114af40, cid 4, qid 0 00:14:37.662 [2024-07-15 09:40:31.979683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114b0c0, cid 5, qid 0 00:14:37.662 [2024-07-15 09:40:31.979760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.662 [2024-07-15 09:40:31.979767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.662 [2024-07-15 09:40:31.979771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114af40) on tqpair=0x11092c0 00:14:37.662 [2024-07-15 09:40:31.979782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.662 [2024-07-15 09:40:31.979788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.662 [2024-07-15 09:40:31.979792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114b0c0) on tqpair=0x11092c0 00:14:37.662 [2024-07-15 09:40:31.979807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11092c0) 00:14:37.662 [2024-07-15 09:40:31.979819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.662 [2024-07-15 09:40:31.979838] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114b0c0, cid 5, qid 0 00:14:37.662 [2024-07-15 09:40:31.979909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.662 [2024-07-15 09:40:31.979917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.662 [2024-07-15 09:40:31.979921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114b0c0) on tqpair=0x11092c0 00:14:37.662 [2024-07-15 09:40:31.979937] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.979942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11092c0) 00:14:37.662 [2024-07-15 09:40:31.979950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.662 [2024-07-15 09:40:31.979970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114b0c0, cid 5, qid 0 00:14:37.662 [2024-07-15 09:40:31.980025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.662 [2024-07-15 09:40:31.980032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.662 [2024-07-15 09:40:31.980036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114b0c0) on tqpair=0x11092c0 00:14:37.662 [2024-07-15 09:40:31.980051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11092c0) 00:14:37.662 [2024-07-15 09:40:31.980063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.662 [2024-07-15 09:40:31.980081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114b0c0, cid 5, qid 0 00:14:37.662 [2024-07-15 09:40:31.980132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.662 [2024-07-15 09:40:31.980151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.662 [2024-07-15 09:40:31.980156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114b0c0) on tqpair=0x11092c0 00:14:37.662 [2024-07-15 09:40:31.980182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11092c0) 00:14:37.662 [2024-07-15 09:40:31.980196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.662 [2024-07-15 09:40:31.980204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11092c0) 00:14:37.662 [2024-07-15 09:40:31.980215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.662 [2024-07-15 09:40:31.980223] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x11092c0) 00:14:37.662 [2024-07-15 09:40:31.980234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.662 [2024-07-15 09:40:31.980247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980252] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11092c0) 00:14:37.662 [2024-07-15 09:40:31.980259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.662 [2024-07-15 09:40:31.980281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114b0c0, cid 5, qid 0 00:14:37.662 [2024-07-15 09:40:31.980288] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114af40, cid 4, qid 0 00:14:37.662 [2024-07-15 09:40:31.980293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114b240, cid 6, qid 0 00:14:37.662 [2024-07-15 09:40:31.980298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114b3c0, cid 7, qid 0 00:14:37.662 [2024-07-15 09:40:31.980449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.662 [2024-07-15 09:40:31.980464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.662 [2024-07-15 09:40:31.980469] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980481] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11092c0): datao=0, datal=8192, cccid=5 00:14:37.662 [2024-07-15 09:40:31.980486] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114b0c0) on tqpair(0x11092c0): expected_datao=0, payload_size=8192 00:14:37.662 [2024-07-15 09:40:31.980491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980513] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980518] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.662 [2024-07-15 09:40:31.980531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.662 [2024-07-15 09:40:31.980534] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980538] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11092c0): datao=0, datal=512, cccid=4 00:14:37.662 [2024-07-15 09:40:31.980543] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114af40) on tqpair(0x11092c0): expected_datao=0, payload_size=512 00:14:37.662 [2024-07-15 09:40:31.980548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980554] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980558] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.662 [2024-07-15 09:40:31.980570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.662 [2024-07-15 09:40:31.980574] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980578] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11092c0): datao=0, datal=512, cccid=6 00:14:37.662 [2024-07-15 09:40:31.980582] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114b240) on tqpair(0x11092c0): expected_datao=0, payload_size=512 00:14:37.662 [2024-07-15 09:40:31.980587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.662 [2024-07-15 
09:40:31.980594] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980598] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980603] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:37.662 [2024-07-15 09:40:31.980609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:37.662 [2024-07-15 09:40:31.980613] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980617] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11092c0): datao=0, datal=4096, cccid=7 00:14:37.662 [2024-07-15 09:40:31.980622] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114b3c0) on tqpair(0x11092c0): expected_datao=0, payload_size=4096 00:14:37.662 [2024-07-15 09:40:31.980626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980633] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980637] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.662 [2024-07-15 09:40:31.980652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.662 [2024-07-15 09:40:31.980656] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114b0c0) on tqpair=0x11092c0 00:14:37.662 [2024-07-15 09:40:31.980680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.662 [2024-07-15 09:40:31.980687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.662 [2024-07-15 09:40:31.980691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114af40) on tqpair=0x11092c0 00:14:37.662 [2024-07-15 09:40:31.980710] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.662 ===================================================== 00:14:37.662 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:37.662 ===================================================== 00:14:37.662 Controller Capabilities/Features 00:14:37.662 ================================ 00:14:37.662 Vendor ID: 8086 00:14:37.662 Subsystem Vendor ID: 8086 00:14:37.662 Serial Number: SPDK00000000000001 00:14:37.662 Model Number: SPDK bdev Controller 00:14:37.662 Firmware Version: 24.09 00:14:37.662 Recommended Arb Burst: 6 00:14:37.662 IEEE OUI Identifier: e4 d2 5c 00:14:37.662 Multi-path I/O 00:14:37.662 May have multiple subsystem ports: Yes 00:14:37.662 May have multiple controllers: Yes 00:14:37.662 Associated with SR-IOV VF: No 00:14:37.662 Max Data Transfer Size: 131072 00:14:37.662 Max Number of Namespaces: 32 00:14:37.662 Max Number of I/O Queues: 127 00:14:37.662 NVMe Specification Version (VS): 1.3 00:14:37.662 NVMe Specification Version (Identify): 1.3 00:14:37.662 Maximum Queue Entries: 128 00:14:37.662 Contiguous Queues Required: Yes 00:14:37.662 Arbitration Mechanisms Supported 00:14:37.662 Weighted Round Robin: Not Supported 00:14:37.662 Vendor Specific: Not Supported 00:14:37.662 Reset Timeout: 15000 ms 00:14:37.662 Doorbell Stride: 4 bytes 00:14:37.662 NVM Subsystem Reset: Not Supported 00:14:37.662 Command Sets 
Supported 00:14:37.662 NVM Command Set: Supported 00:14:37.662 Boot Partition: Not Supported 00:14:37.662 Memory Page Size Minimum: 4096 bytes 00:14:37.662 Memory Page Size Maximum: 4096 bytes 00:14:37.662 Persistent Memory Region: Not Supported 00:14:37.662 Optional Asynchronous Events Supported 00:14:37.662 Namespace Attribute Notices: Supported 00:14:37.662 Firmware Activation Notices: Not Supported 00:14:37.662 ANA Change Notices: Not Supported 00:14:37.662 PLE Aggregate Log Change Notices: Not Supported 00:14:37.662 LBA Status Info Alert Notices: Not Supported 00:14:37.662 EGE Aggregate Log Change Notices: Not Supported 00:14:37.662 Normal NVM Subsystem Shutdown event: Not Supported 00:14:37.662 Zone Descriptor Change Notices: Not Supported 00:14:37.662 Discovery Log Change Notices: Not Supported 00:14:37.662 Controller Attributes 00:14:37.662 128-bit Host Identifier: Supported 00:14:37.662 Non-Operational Permissive Mode: Not Supported 00:14:37.662 NVM Sets: Not Supported 00:14:37.662 Read Recovery Levels: Not Supported 00:14:37.662 Endurance Groups: Not Supported 00:14:37.662 Predictable Latency Mode: Not Supported 00:14:37.662 Traffic Based Keep ALive: Not Supported 00:14:37.662 Namespace Granularity: Not Supported 00:14:37.662 SQ Associations: Not Supported 00:14:37.662 UUID List: Not Supported 00:14:37.662 Multi-Domain Subsystem: Not Supported 00:14:37.662 Fixed Capacity Management: Not Supported 00:14:37.662 Variable Capacity Management: Not Supported 00:14:37.662 Delete Endurance Group: Not Supported 00:14:37.662 Delete NVM Set: Not Supported 00:14:37.662 Extended LBA Formats Supported: Not Supported 00:14:37.662 Flexible Data Placement Supported: Not Supported 00:14:37.662 00:14:37.662 Controller Memory Buffer Support 00:14:37.662 ================================ 00:14:37.662 Supported: No 00:14:37.662 00:14:37.662 Persistent Memory Region Support 00:14:37.662 ================================ 00:14:37.662 Supported: No 00:14:37.662 00:14:37.662 Admin Command Set Attributes 00:14:37.662 ============================ 00:14:37.662 Security Send/Receive: Not Supported 00:14:37.662 Format NVM: Not Supported 00:14:37.662 Firmware Activate/Download: Not Supported 00:14:37.662 Namespace Management: Not Supported 00:14:37.662 Device Self-Test: Not Supported 00:14:37.662 Directives: Not Supported 00:14:37.662 NVMe-MI: Not Supported 00:14:37.662 Virtualization Management: Not Supported 00:14:37.662 Doorbell Buffer Config: Not Supported 00:14:37.662 Get LBA Status Capability: Not Supported 00:14:37.662 Command & Feature Lockdown Capability: Not Supported 00:14:37.662 Abort Command Limit: 4 00:14:37.662 Async Event Request Limit: 4 00:14:37.662 Number of Firmware Slots: N/A 00:14:37.662 Firmware Slot 1 Read-Only: N/A 00:14:37.662 Firmware Activation Without Reset: [2024-07-15 09:40:31.980717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.662 [2024-07-15 09:40:31.980721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114b240) on tqpair=0x11092c0 00:14:37.662 [2024-07-15 09:40:31.980732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.662 [2024-07-15 09:40:31.980738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.662 [2024-07-15 09:40:31.980742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.662 [2024-07-15 09:40:31.980746] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114b3c0) on tqpair=0x11092c0 00:14:37.662 N/A 00:14:37.662 Multiple Update Detection Support: N/A 00:14:37.662 Firmware Update Granularity: No Information Provided 00:14:37.662 Per-Namespace SMART Log: No 00:14:37.662 Asymmetric Namespace Access Log Page: Not Supported 00:14:37.662 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:37.662 Command Effects Log Page: Supported 00:14:37.662 Get Log Page Extended Data: Supported 00:14:37.662 Telemetry Log Pages: Not Supported 00:14:37.662 Persistent Event Log Pages: Not Supported 00:14:37.662 Supported Log Pages Log Page: May Support 00:14:37.662 Commands Supported & Effects Log Page: Not Supported 00:14:37.662 Feature Identifiers & Effects Log Page:May Support 00:14:37.662 NVMe-MI Commands & Effects Log Page: May Support 00:14:37.662 Data Area 4 for Telemetry Log: Not Supported 00:14:37.662 Error Log Page Entries Supported: 128 00:14:37.662 Keep Alive: Supported 00:14:37.662 Keep Alive Granularity: 10000 ms 00:14:37.662 00:14:37.662 NVM Command Set Attributes 00:14:37.662 ========================== 00:14:37.662 Submission Queue Entry Size 00:14:37.662 Max: 64 00:14:37.662 Min: 64 00:14:37.662 Completion Queue Entry Size 00:14:37.662 Max: 16 00:14:37.662 Min: 16 00:14:37.662 Number of Namespaces: 32 00:14:37.662 Compare Command: Supported 00:14:37.662 Write Uncorrectable Command: Not Supported 00:14:37.662 Dataset Management Command: Supported 00:14:37.662 Write Zeroes Command: Supported 00:14:37.663 Set Features Save Field: Not Supported 00:14:37.663 Reservations: Supported 00:14:37.663 Timestamp: Not Supported 00:14:37.663 Copy: Supported 00:14:37.663 Volatile Write Cache: Present 00:14:37.663 Atomic Write Unit (Normal): 1 00:14:37.663 Atomic Write Unit (PFail): 1 00:14:37.663 Atomic Compare & Write Unit: 1 00:14:37.663 Fused Compare & Write: Supported 00:14:37.663 Scatter-Gather List 00:14:37.663 SGL Command Set: Supported 00:14:37.663 SGL Keyed: Supported 00:14:37.663 SGL Bit Bucket Descriptor: Not Supported 00:14:37.663 SGL Metadata Pointer: Not Supported 00:14:37.663 Oversized SGL: Not Supported 00:14:37.663 SGL Metadata Address: Not Supported 00:14:37.663 SGL Offset: Supported 00:14:37.663 Transport SGL Data Block: Not Supported 00:14:37.663 Replay Protected Memory Block: Not Supported 00:14:37.663 00:14:37.663 Firmware Slot Information 00:14:37.663 ========================= 00:14:37.663 Active slot: 1 00:14:37.663 Slot 1 Firmware Revision: 24.09 00:14:37.663 00:14:37.663 00:14:37.663 Commands Supported and Effects 00:14:37.663 ============================== 00:14:37.663 Admin Commands 00:14:37.663 -------------- 00:14:37.663 Get Log Page (02h): Supported 00:14:37.663 Identify (06h): Supported 00:14:37.663 Abort (08h): Supported 00:14:37.663 Set Features (09h): Supported 00:14:37.663 Get Features (0Ah): Supported 00:14:37.663 Asynchronous Event Request (0Ch): Supported 00:14:37.663 Keep Alive (18h): Supported 00:14:37.663 I/O Commands 00:14:37.663 ------------ 00:14:37.663 Flush (00h): Supported LBA-Change 00:14:37.663 Write (01h): Supported LBA-Change 00:14:37.663 Read (02h): Supported 00:14:37.663 Compare (05h): Supported 00:14:37.663 Write Zeroes (08h): Supported LBA-Change 00:14:37.663 Dataset Management (09h): Supported LBA-Change 00:14:37.663 Copy (19h): Supported LBA-Change 00:14:37.663 00:14:37.663 Error Log 00:14:37.663 ========= 00:14:37.663 00:14:37.663 Arbitration 00:14:37.663 =========== 00:14:37.663 Arbitration Burst: 1 00:14:37.663 00:14:37.663 Power 
Management 00:14:37.663 ================ 00:14:37.663 Number of Power States: 1 00:14:37.663 Current Power State: Power State #0 00:14:37.663 Power State #0: 00:14:37.663 Max Power: 0.00 W 00:14:37.663 Non-Operational State: Operational 00:14:37.663 Entry Latency: Not Reported 00:14:37.663 Exit Latency: Not Reported 00:14:37.663 Relative Read Throughput: 0 00:14:37.663 Relative Read Latency: 0 00:14:37.663 Relative Write Throughput: 0 00:14:37.663 Relative Write Latency: 0 00:14:37.663 Idle Power: Not Reported 00:14:37.663 Active Power: Not Reported 00:14:37.663 Non-Operational Permissive Mode: Not Supported 00:14:37.663 00:14:37.663 Health Information 00:14:37.663 ================== 00:14:37.663 Critical Warnings: 00:14:37.663 Available Spare Space: OK 00:14:37.663 Temperature: OK 00:14:37.663 Device Reliability: OK 00:14:37.663 Read Only: No 00:14:37.663 Volatile Memory Backup: OK 00:14:37.663 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:37.663 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:37.663 Available Spare: 0% 00:14:37.663 Available Spare Threshold: 0% 00:14:37.663 Life Percentage Used:[2024-07-15 09:40:31.980868] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.980875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.980883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.980921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114b3c0, cid 7, qid 0 00:14:37.663 [2024-07-15 09:40:31.980981] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.980989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.980993] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.980997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114b3c0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.981041] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:37.663 [2024-07-15 09:40:31.981054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114a940) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.981062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.663 [2024-07-15 09:40:31.981069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114aac0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.981074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.663 [2024-07-15 09:40:31.981079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114ac40) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.981084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.663 [2024-07-15 09:40:31.981089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.981094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:37.663 [2024-07-15 
09:40:31.981104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.981130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.981156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.981210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.981217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.981221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.981233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981242] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.981250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.981272] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.981364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.981378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.981383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.981393] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:37.663 [2024-07-15 09:40:31.981399] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:37.663 [2024-07-15 09:40:31.981409] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981414] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.981426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.981446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.981496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.981507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.981512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981516] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.981528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.981544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.981563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.981613] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.981623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.981628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.981643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.981671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.981689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.981738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.981745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.981749] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.981763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981772] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.981779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.981797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.981849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.981855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.981859] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.981874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:14:37.663 [2024-07-15 09:40:31.981879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.981883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.981890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.981923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.981991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.981998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.982002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.982017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.982033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.982051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.982107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.982114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.982118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.982133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.982149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.982167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.982221] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.982228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.982232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.982246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.982263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.982281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.982338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.982345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.982349] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.982364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.982380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.982398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.982464] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.982471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.982475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.982489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.982506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.982524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.982575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.982582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.982586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.982601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.982617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:14:37.663 [2024-07-15 09:40:31.982635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.982697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.982703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.982708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.982722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.663 [2024-07-15 09:40:31.982738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.663 [2024-07-15 09:40:31.982756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.663 [2024-07-15 09:40:31.982805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.663 [2024-07-15 09:40:31.982811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.663 [2024-07-15 09:40:31.982815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.663 [2024-07-15 09:40:31.982830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.663 [2024-07-15 09:40:31.982835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.982839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.664 [2024-07-15 09:40:31.982847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.664 [2024-07-15 09:40:31.982865] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.664 [2024-07-15 09:40:31.982935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.664 [2024-07-15 09:40:31.982944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.664 [2024-07-15 09:40:31.982948] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.982952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.664 [2024-07-15 09:40:31.982963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.982968] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.982972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.664 [2024-07-15 09:40:31.982979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.664 [2024-07-15 09:40:31.983000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.664 [2024-07-15 09:40:31.983055] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.664 [2024-07-15 09:40:31.983062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.664 [2024-07-15 09:40:31.983066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.983070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.664 [2024-07-15 09:40:31.983080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.983085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.983089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.664 [2024-07-15 09:40:31.983096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.664 [2024-07-15 09:40:31.983114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.664 [2024-07-15 09:40:31.983171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.664 [2024-07-15 09:40:31.983178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.664 [2024-07-15 09:40:31.983182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.983186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.664 [2024-07-15 09:40:31.983197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.983202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.983206] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.664 [2024-07-15 09:40:31.983213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.664 [2024-07-15 09:40:31.983231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.664 [2024-07-15 09:40:31.983279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.664 [2024-07-15 09:40:31.983286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.664 [2024-07-15 09:40:31.983290] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.983294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.664 [2024-07-15 09:40:31.983304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.983309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.983313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.664 [2024-07-15 09:40:31.983320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.664 [2024-07-15 09:40:31.983338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.664 [2024-07-15 09:40:31.983390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.664 [2024-07-15 09:40:31.983397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.664 [2024-07-15 
09:40:31.983401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.983405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.664 [2024-07-15 09:40:31.983416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.983421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.983425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.664 [2024-07-15 09:40:31.983432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.664 [2024-07-15 09:40:31.983450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.664
[identical DEBUG cycles for the same tcp req 0x114adc0 (pdu type = 5 -> pdu_psh_handle -> capsule_resp_hdr_handle -> req_complete -> build_contig_request -> capsule_cmd_send -> FABRIC PROPERTY GET notice -> cmd_send_complete) repeat for each subsequent property-get completion between 09:40:31.983508 and 09:40:31.984831; the duplicated entries are omitted here] 00:14:37.664 [2024-07-15 09:40:31.984849]
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.664 [2024-07-15 09:40:31.988915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.664 [2024-07-15 09:40:31.988936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.664 [2024-07-15 09:40:31.988941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.988946] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.664 [2024-07-15 09:40:31.988960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.988966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.988969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11092c0) 00:14:37.664 [2024-07-15 09:40:31.988978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:37.664 [2024-07-15 09:40:31.989004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114adc0, cid 3, qid 0 00:14:37.664 [2024-07-15 09:40:31.989074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:37.664 [2024-07-15 09:40:31.989081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:37.664 [2024-07-15 09:40:31.989085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:37.664 [2024-07-15 09:40:31.989089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x114adc0) on tqpair=0x11092c0 00:14:37.664 [2024-07-15 09:40:31.989098] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:14:37.664 0% 00:14:37.664 Data Units Read: 0 00:14:37.664 Data Units Written: 0 00:14:37.664 Host Read Commands: 0 00:14:37.664 Host Write Commands: 0 00:14:37.664 Controller Busy Time: 0 minutes 00:14:37.664 Power Cycles: 0 00:14:37.664 Power On Hours: 0 hours 00:14:37.664 Unsafe Shutdowns: 0 00:14:37.664 Unrecoverable Media Errors: 0 00:14:37.664 Lifetime Error Log Entries: 0 00:14:37.664 Warning Temperature Time: 0 minutes 00:14:37.664 Critical Temperature Time: 0 minutes 00:14:37.664 00:14:37.664 Number of Queues 00:14:37.664 ================ 00:14:37.664 Number of I/O Submission Queues: 127 00:14:37.664 Number of I/O Completion Queues: 127 00:14:37.664 00:14:37.664 Active Namespaces 00:14:37.664 ================= 00:14:37.664 Namespace ID:1 00:14:37.664 Error Recovery Timeout: Unlimited 00:14:37.664 Command Set Identifier: NVM (00h) 00:14:37.664 Deallocate: Supported 00:14:37.664 Deallocated/Unwritten Error: Not Supported 00:14:37.664 Deallocated Read Value: Unknown 00:14:37.664 Deallocate in Write Zeroes: Not Supported 00:14:37.664 Deallocated Guard Field: 0xFFFF 00:14:37.664 Flush: Supported 00:14:37.664 Reservation: Supported 00:14:37.664 Namespace Sharing Capabilities: Multiple Controllers 00:14:37.664 Size (in LBAs): 131072 (0GiB) 00:14:37.664 Capacity (in LBAs): 131072 (0GiB) 00:14:37.664 Utilization (in LBAs): 131072 (0GiB) 00:14:37.664 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:37.664 EUI64: ABCDEF0123456789 00:14:37.664 UUID: bcba1db1-a34d-4dd8-a0b9-d388e753c7d0 00:14:37.664 Thin Provisioning: Not Supported 00:14:37.664 Per-NS Atomic Units: Yes 00:14:37.664 Atomic Boundary Size (Normal): 0 00:14:37.664 Atomic Boundary Size (PFail): 0 00:14:37.664 Atomic Boundary Offset: 0 
00:14:37.664 Maximum Single Source Range Length: 65535 00:14:37.664 Maximum Copy Length: 65535 00:14:37.664 Maximum Source Range Count: 1 00:14:37.664 NGUID/EUI64 Never Reused: No 00:14:37.664 Namespace Write Protected: No 00:14:37.664 Number of LBA Formats: 1 00:14:37.664 Current LBA Format: LBA Format #00 00:14:37.664 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:37.664 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:37.664 rmmod nvme_tcp 00:14:37.664 rmmod nvme_fabrics 00:14:37.664 rmmod nvme_keyring 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 75024 ']' 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 75024 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 75024 ']' 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 75024 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:14:37.664 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:37.922 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75024 00:14:37.922 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:37.922 killing process with pid 75024 00:14:37.922 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:37.922 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75024' 00:14:37.922 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 75024 00:14:37.922 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 75024 00:14:38.180 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:38.180 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:38.180 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:38.180 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.180 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:38.180 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.180 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.180 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.180 09:40:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:38.180 00:14:38.180 real 0m2.722s 00:14:38.180 user 0m7.200s 00:14:38.180 sys 0m0.745s 00:14:38.180 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:38.180 09:40:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:38.180 ************************************ 00:14:38.180 END TEST nvmf_identify 00:14:38.180 ************************************ 00:14:38.180 09:40:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:38.180 09:40:32 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:38.180 09:40:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:38.180 09:40:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.180 09:40:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.180 ************************************ 00:14:38.180 START TEST nvmf_perf 00:14:38.180 ************************************ 00:14:38.180 09:40:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:38.438 * Looking for test storage... 00:14:38.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:38.438 Cannot find device "nvmf_tgt_br" 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:38.438 Cannot find device "nvmf_tgt_br2" 
00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:38.438 Cannot find device "nvmf_tgt_br" 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:38.438 Cannot find device "nvmf_tgt_br2" 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:38.438 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:38.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.439 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:38.439 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:38.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.439 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:38.439 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:38.439 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:38.439 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:38.439 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:38.439 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:38.439 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:38.696 09:40:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.696 09:40:32 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.696 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.696 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.696 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.696 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:38.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:14:38.696 00:14:38.696 --- 10.0.0.2 ping statistics --- 00:14:38.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.696 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:14:38.696 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:38.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:14:38.696 00:14:38.696 --- 10.0.0.3 ping statistics --- 00:14:38.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.696 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:14:38.697 00:14:38.697 --- 10.0.0.1 ping statistics --- 00:14:38.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.697 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=75227 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 75227 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 75227 ']' 00:14:38.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
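For reference, the nvmf_veth_init trace above boils down to the following standalone sequence. This is a condensed sketch of what nvmf/common.sh just did, using the interface names, the nvmf_tgt_ns_spdk namespace and the 10.0.0.0/24 addresses exactly as they appear in the trace; it is not a separate script shipped with the repository.

    # network namespace that will host the SPDK target
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: initiator end stays in the root namespace, target ends move into the netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target-side interfaces
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up on both sides of the namespace boundary
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge tying the three root-namespace veth ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # allow NVMe/TCP (port 4420) in and let traffic cross the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # sanity check, mirroring the pings above
    ping -c 1 10.0.0.2

Once the pings succeed, nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the "Waiting for process to start up..." line above.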
00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.697 09:40:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:38.697 [2024-07-15 09:40:33.137422] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:38.697 [2024-07-15 09:40:33.137852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.969 [2024-07-15 09:40:33.282203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.969 [2024-07-15 09:40:33.415610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.969 [2024-07-15 09:40:33.415914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.969 [2024-07-15 09:40:33.416171] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.969 [2024-07-15 09:40:33.416340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.969 [2024-07-15 09:40:33.416572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.969 [2024-07-15 09:40:33.416851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.969 [2024-07-15 09:40:33.417066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.969 [2024-07-15 09:40:33.417071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.969 [2024-07-15 09:40:33.416906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.232 [2024-07-15 09:40:33.481204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:39.797 09:40:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:39.797 09:40:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:14:39.797 09:40:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.797 09:40:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:39.797 09:40:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:39.797 09:40:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.797 09:40:34 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:39.797 09:40:34 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:40.361 09:40:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:40.361 09:40:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:40.619 09:40:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:40.619 09:40:34 nvmf_tcp.nvmf_perf -- 
host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:40.877 09:40:35 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:40.877 09:40:35 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:40.877 09:40:35 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:40.877 09:40:35 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:40.877 09:40:35 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:41.195 [2024-07-15 09:40:35.377473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.195 09:40:35 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:41.468 09:40:35 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:41.468 09:40:35 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:41.468 09:40:35 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:41.468 09:40:35 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:41.726 09:40:36 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.293 [2024-07-15 09:40:36.489376] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.293 09:40:36 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:42.293 09:40:36 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:42.293 09:40:36 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:42.293 09:40:36 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:42.293 09:40:36 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:43.670 Initializing NVMe Controllers 00:14:43.670 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:43.670 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:43.670 Initialization complete. Launching workers. 
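Stepping back, the target-side configuration traced above (the host/perf.sh@28 through @49 calls) is a short sequence of rpc.py calls against the nvmf_tgt started earlier. A condensed sketch, with the arguments reproduced exactly as they appear in the trace (the bare -o passed to nvmf_create_transport and the serial number are copied from the log rather than explained here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 64 MiB malloc bdev with 512-byte blocks; it becomes the first namespace
    $rpc bdev_malloc_create 64 512
    # TCP transport, then a subsystem that allows any host (-a) with serial SPDK00000000000001 (-s)
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # attach both bdevs as namespaces
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    # listen on the namespace-side address set up by nvmf_veth_init
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the subsystem exported, perf.sh first runs spdk_nvme_perf against the local PCIe controller (0000:00:10.0) as a baseline; that run's latency table follows.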
00:14:43.670 ======================================================== 00:14:43.670 Latency(us) 00:14:43.670 Device Information : IOPS MiB/s Average min max 00:14:43.670 PCIE (0000:00:10.0) NSID 1 from core 0: 21240.84 82.97 1505.66 288.36 9238.21 00:14:43.670 ======================================================== 00:14:43.670 Total : 21240.84 82.97 1505.66 288.36 9238.21 00:14:43.670 00:14:43.670 09:40:37 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:45.047 Initializing NVMe Controllers 00:14:45.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:45.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:45.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:45.047 Initialization complete. Launching workers. 00:14:45.047 ======================================================== 00:14:45.047 Latency(us) 00:14:45.047 Device Information : IOPS MiB/s Average min max 00:14:45.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3594.95 14.04 277.81 109.84 5230.54 00:14:45.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8120.30 7002.41 12011.46 00:14:45.047 ======================================================== 00:14:45.047 Total : 3718.95 14.53 539.30 109.84 12011.46 00:14:45.047 00:14:45.047 09:40:39 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:46.422 Initializing NVMe Controllers 00:14:46.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:46.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:46.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:46.423 Initialization complete. Launching workers. 00:14:46.423 ======================================================== 00:14:46.423 Latency(us) 00:14:46.423 Device Information : IOPS MiB/s Average min max 00:14:46.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8586.27 33.54 3727.38 742.82 9437.95 00:14:46.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3947.25 15.42 8118.95 5107.53 16783.53 00:14:46.423 ======================================================== 00:14:46.423 Total : 12533.52 48.96 5110.44 742.82 16783.53 00:14:46.423 00:14:46.423 09:40:40 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:46.423 09:40:40 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:48.973 Initializing NVMe Controllers 00:14:48.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:48.973 Controller IO queue size 128, less than required. 00:14:48.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:48.973 Controller IO queue size 128, less than required. 00:14:48.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
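The fabrics-side runs all use the same tool with different queue depths and I/O sizes; the large-block run now in progress, for example, was launched as shown below. This is the command copied from the trace with added comments; the annotations for -q/-o/-w/-M/-t/-r are the usual spdk_nvme_perf meanings, while -O is left unannotated rather than guessed at.

    # queue depth 128, 256 KiB I/Os, random mixed workload with 50% reads,
    # 2-second run, NVMe/TCP transport to the 10.0.0.2:4420 listener
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 262144 -O 16384 \
        -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

As a quick sanity check on the table that follows, throughput in MiB/s should equal IOPS x I/O size / 2^20: 1536.86 IOPS x 262144 B / 1048576 = 384.2 MiB/s, matching the 384.22 reported for NSID 1.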
00:14:48.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:48.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:48.973 Initialization complete. Launching workers. 00:14:48.973 ======================================================== 00:14:48.973 Latency(us) 00:14:48.973 Device Information : IOPS MiB/s Average min max 00:14:48.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1536.86 384.22 84567.81 41806.14 139315.74 00:14:48.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 614.25 153.56 210016.97 74360.51 344294.78 00:14:48.973 ======================================================== 00:14:48.973 Total : 2151.11 537.78 120389.64 41806.14 344294.78 00:14:48.973 00:14:48.973 09:40:43 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:48.973 Initializing NVMe Controllers 00:14:48.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:48.973 Controller IO queue size 128, less than required. 00:14:48.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:48.973 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:48.973 Controller IO queue size 128, less than required. 00:14:48.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:48.973 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:48.973 WARNING: Some requested NVMe devices were skipped 00:14:48.973 No valid NVMe controllers or AIO or URING devices found 00:14:48.973 09:40:43 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:51.519 Initializing NVMe Controllers 00:14:51.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:51.519 Controller IO queue size 128, less than required. 00:14:51.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:51.519 Controller IO queue size 128, less than required. 00:14:51.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:51.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:51.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:51.519 Initialization complete. Launching workers. 
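This final run passes --transport-stat, so before its latency table it dumps per-namespace TCP transport counters (polls, idle polls, socket completions, NVMe completions, submitted and queued requests). As a rough reading of the numbers below: for NSID 1 the poller came back with no work on 5375 of 9444 polls (about 57% idle), and the 4069 socket completions carried 6533 NVMe completions, i.e. roughly 1.6 command completions reaped per socket event.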
00:14:51.519 00:14:51.519 ==================== 00:14:51.519 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:51.519 TCP transport: 00:14:51.519 polls: 9444 00:14:51.519 idle_polls: 5375 00:14:51.519 sock_completions: 4069 00:14:51.519 nvme_completions: 6533 00:14:51.519 submitted_requests: 9848 00:14:51.519 queued_requests: 1 00:14:51.519 00:14:51.519 ==================== 00:14:51.519 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:51.519 TCP transport: 00:14:51.519 polls: 11898 00:14:51.519 idle_polls: 7735 00:14:51.519 sock_completions: 4163 00:14:51.519 nvme_completions: 6413 00:14:51.519 submitted_requests: 9612 00:14:51.519 queued_requests: 1 00:14:51.519 ======================================================== 00:14:51.519 Latency(us) 00:14:51.519 Device Information : IOPS MiB/s Average min max 00:14:51.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1629.55 407.39 80570.36 42234.22 124591.99 00:14:51.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1599.61 399.90 80535.27 38916.68 132203.07 00:14:51.519 ======================================================== 00:14:51.519 Total : 3229.16 807.29 80552.98 38916.68 132203.07 00:14:51.519 00:14:51.519 09:40:45 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:51.519 09:40:45 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.777 09:40:46 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:51.777 09:40:46 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:51.777 09:40:46 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:51.777 09:40:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:51.777 09:40:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:14:51.777 09:40:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:51.778 09:40:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:14:51.778 09:40:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:51.778 09:40:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:51.778 rmmod nvme_tcp 00:14:51.778 rmmod nvme_fabrics 00:14:51.778 rmmod nvme_keyring 00:14:51.778 09:40:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:51.778 09:40:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:14:51.778 09:40:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:14:51.778 09:40:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 75227 ']' 00:14:51.778 09:40:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 75227 00:14:51.778 09:40:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 75227 ']' 00:14:51.778 09:40:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 75227 00:14:52.036 09:40:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:14:52.036 09:40:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:52.036 09:40:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75227 00:14:52.036 killing process with pid 75227 00:14:52.036 09:40:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:52.036 09:40:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:52.036 09:40:46 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75227' 00:14:52.036 09:40:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 75227 00:14:52.036 09:40:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 75227 00:14:52.601 09:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:52.601 09:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:52.601 09:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:52.601 09:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.601 09:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:52.601 09:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.601 09:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.602 09:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.602 09:40:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:52.860 ************************************ 00:14:52.860 END TEST nvmf_perf 00:14:52.860 ************************************ 00:14:52.860 00:14:52.860 real 0m14.472s 00:14:52.860 user 0m53.186s 00:14:52.860 sys 0m4.115s 00:14:52.860 09:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:52.860 09:40:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:52.860 09:40:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:52.860 09:40:47 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:52.860 09:40:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:52.860 09:40:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.860 09:40:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:52.860 ************************************ 00:14:52.860 START TEST nvmf_fio_host 00:14:52.860 ************************************ 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:52.860 * Looking for test storage... 
00:14:52.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.860 09:40:47 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
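The nvmftestinit call traced below builds a self-contained virtual network for the TCP tests: the target runs in its own network namespace, the initiator stays in the root namespace, and veth pairs plus a bridge connect the two, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 on the two target interfaces. A condensed sketch of the equivalent commands, reconstructed from the xtrace output that follows (interface and namespace names are the harness's; the cleanup of leftover devices and the true/false guards visible in the trace are omitted):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per interface; the *_br ends stay in the root namespace and get bridged
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the root-namespace peers so initiator and both target ports share one segment
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity check in both directions before the target is started
    ping -c 1 10.0.0.2; ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1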
00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:52.861 Cannot find device "nvmf_tgt_br" 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.861 Cannot find device "nvmf_tgt_br2" 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:52.861 Cannot find device "nvmf_tgt_br" 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:52.861 Cannot find device "nvmf_tgt_br2" 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:14:52.861 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:53.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:14:53.119 00:14:53.119 --- 10.0.0.2 ping statistics --- 00:14:53.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.119 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:53.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:14:53.119 00:14:53.119 --- 10.0.0.3 ping statistics --- 00:14:53.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.119 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:53.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:53.119 00:14:53.119 --- 10.0.0.1 ping statistics --- 00:14:53.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.119 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.119 09:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:53.376 09:40:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75629 00:14:53.377 09:40:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.377 09:40:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:53.377 09:40:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75629 00:14:53.377 09:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75629 ']' 00:14:53.377 09:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.377 09:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.377 09:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.377 09:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.377 09:40:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:53.377 [2024-07-15 09:40:47.637766] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:53.377 [2024-07-15 09:40:47.637857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.377 [2024-07-15 09:40:47.773425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.635 [2024-07-15 09:40:47.895708] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:53.635 [2024-07-15 09:40:47.895797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.635 [2024-07-15 09:40:47.895817] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.635 [2024-07-15 09:40:47.895833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.635 [2024-07-15 09:40:47.895845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.635 [2024-07-15 09:40:47.896057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.635 [2024-07-15 09:40:47.896275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.635 [2024-07-15 09:40:47.896880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.635 [2024-07-15 09:40:47.896930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.635 [2024-07-15 09:40:47.959673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:54.200 09:40:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.200 09:40:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:14:54.200 09:40:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:54.458 [2024-07-15 09:40:48.838521] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.458 09:40:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:54.458 09:40:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.458 09:40:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:54.458 09:40:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:55.023 Malloc1 00:14:55.023 09:40:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:55.280 09:40:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:55.538 09:40:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.796 [2024-07-15 09:40:50.098324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.796 09:40:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:56.054 09:40:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:56.312 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:56.312 fio-3.35 00:14:56.312 Starting 1 thread 00:14:58.851 00:14:58.851 test: (groupid=0, jobs=1): err= 0: pid=75712: Mon Jul 15 09:40:52 2024 00:14:58.851 read: IOPS=8306, BW=32.4MiB/s (34.0MB/s)(65.1MiB/2007msec) 00:14:58.851 slat (nsec): min=1815, max=274934, avg=2541.96, stdev=3119.28 00:14:58.851 clat (usec): min=1647, max=14883, avg=8023.76, stdev=783.18 00:14:58.851 lat (usec): min=1686, max=14885, avg=8026.31, stdev=782.98 00:14:58.851 clat percentiles (usec): 00:14:58.851 | 1.00th=[ 6718], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7504], 00:14:58.851 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8094], 00:14:58.851 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 9372], 00:14:58.851 | 99.00th=[10552], 99.50th=[11076], 99.90th=[13566], 99.95th=[13960], 00:14:58.851 | 99.99th=[14877] 00:14:58.851 bw ( KiB/s): min=31888, max=34384, per=99.96%, avg=33216.00, stdev=1072.06, samples=4 00:14:58.851 iops : min= 7972, max= 8596, avg=8304.00, stdev=268.01, samples=4 00:14:58.851 write: IOPS=8310, BW=32.5MiB/s (34.0MB/s)(65.2MiB/2007msec); 0 zone resets 00:14:58.851 
slat (nsec): min=1931, max=152856, avg=2647.60, stdev=1993.18 00:14:58.851 clat (usec): min=1513, max=13770, avg=7319.58, stdev=701.56 00:14:58.851 lat (usec): min=1522, max=13773, avg=7322.23, stdev=701.47 00:14:58.851 clat percentiles (usec): 00:14:58.851 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6849], 00:14:58.851 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:14:58.851 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8094], 95.00th=[ 8455], 00:14:58.851 | 99.00th=[ 9634], 99.50th=[ 9896], 99.90th=[11731], 99.95th=[12780], 00:14:58.851 | 99.99th=[13698] 00:14:58.851 bw ( KiB/s): min=32640, max=33976, per=99.97%, avg=33232.00, stdev=690.82, samples=4 00:14:58.851 iops : min= 8160, max= 8494, avg=8308.00, stdev=172.70, samples=4 00:14:58.851 lat (msec) : 2=0.04%, 4=0.13%, 10=98.35%, 20=1.48% 00:14:58.851 cpu : usr=66.60%, sys=25.22%, ctx=10, majf=0, minf=7 00:14:58.851 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:58.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:58.851 issued rwts: total=16672,16679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.851 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:58.851 00:14:58.851 Run status group 0 (all jobs): 00:14:58.851 READ: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=65.1MiB (68.3MB), run=2007-2007msec 00:14:58.851 WRITE: bw=32.5MiB/s (34.0MB/s), 32.5MiB/s-32.5MiB/s (34.0MB/s-34.0MB/s), io=65.2MiB (68.3MB), run=2007-2007msec 00:14:58.851 09:40:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:58.851 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:58.851 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:58.851 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 
00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:58.852 09:40:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:58.852 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:58.852 fio-3.35 00:14:58.852 Starting 1 thread 00:15:01.382 00:15:01.382 test: (groupid=0, jobs=1): err= 0: pid=75763: Mon Jul 15 09:40:55 2024 00:15:01.382 read: IOPS=7636, BW=119MiB/s (125MB/s)(240MiB/2008msec) 00:15:01.382 slat (usec): min=3, max=120, avg= 3.98, stdev= 1.91 00:15:01.382 clat (usec): min=2873, max=20285, avg=9351.52, stdev=2540.36 00:15:01.382 lat (usec): min=2877, max=20289, avg=9355.50, stdev=2540.40 00:15:01.382 clat percentiles (usec): 00:15:01.382 | 1.00th=[ 4621], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 7046], 00:15:01.382 | 30.00th=[ 7832], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10028], 00:15:01.382 | 70.00th=[10683], 80.00th=[11469], 90.00th=[12780], 95.00th=[13698], 00:15:01.382 | 99.00th=[15795], 99.50th=[16909], 99.90th=[17957], 99.95th=[17957], 00:15:01.382 | 99.99th=[18220] 00:15:01.382 bw ( KiB/s): min=49056, max=73024, per=50.68%, avg=61928.00, stdev=9975.22, samples=4 00:15:01.382 iops : min= 3066, max= 4564, avg=3870.50, stdev=623.45, samples=4 00:15:01.382 write: IOPS=4460, BW=69.7MiB/s (73.1MB/s)(127MiB/1823msec); 0 zone resets 00:15:01.382 slat (usec): min=36, max=233, avg=38.82, stdev= 6.15 00:15:01.382 clat (usec): min=3612, max=23838, avg=13107.83, stdev=2689.03 00:15:01.382 lat (usec): min=3649, max=23890, avg=13146.65, stdev=2689.73 00:15:01.382 clat percentiles (usec): 00:15:01.382 | 1.00th=[ 8225], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10683], 00:15:01.382 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12649], 60.00th=[13435], 00:15:01.382 | 70.00th=[14353], 80.00th=[15401], 90.00th=[16712], 95.00th=[17695], 00:15:01.382 | 99.00th=[20841], 99.50th=[21627], 99.90th=[23200], 99.95th=[23200], 00:15:01.382 | 99.99th=[23725] 00:15:01.382 bw ( KiB/s): min=51168, max=74208, per=90.46%, avg=64560.00, stdev=9710.49, samples=4 00:15:01.382 iops : min= 3198, max= 4638, avg=4035.00, stdev=606.91, samples=4 00:15:01.382 lat (msec) : 4=0.19%, 10=42.35%, 20=56.88%, 50=0.58% 00:15:01.382 cpu : usr=81.61%, sys=14.15%, ctx=4, majf=0, minf=12 00:15:01.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:01.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:01.382 issued rwts: total=15335,8132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:01.382 00:15:01.382 Run status group 0 (all jobs): 00:15:01.382 READ: 
bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=240MiB (251MB), run=2008-2008msec 00:15:01.382 WRITE: bw=69.7MiB/s (73.1MB/s), 69.7MiB/s-69.7MiB/s (73.1MB/s-73.1MB/s), io=127MiB (133MB), run=1823-1823msec 00:15:01.382 09:40:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.382 09:40:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:01.382 09:40:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:01.382 09:40:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:01.382 09:40:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:01.382 09:40:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:01.382 09:40:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:01.382 09:40:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.382 09:40:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:01.382 09:40:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.382 09:40:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.382 rmmod nvme_tcp 00:15:01.382 rmmod nvme_fabrics 00:15:01.382 rmmod nvme_keyring 00:15:01.382 09:40:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75629 ']' 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75629 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75629 ']' 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75629 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75629 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:01.383 killing process with pid 75629 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75629' 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75629 00:15:01.383 09:40:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75629 00:15:01.641 09:40:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:01.641 09:40:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:01.641 09:40:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:01.641 09:40:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.641 09:40:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:01.641 09:40:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.641 09:40:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
14> /dev/null' 00:15:01.641 09:40:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.641 09:40:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:01.641 00:15:01.641 real 0m8.956s 00:15:01.641 user 0m36.611s 00:15:01.641 sys 0m2.491s 00:15:01.641 09:40:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:01.641 09:40:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:01.641 ************************************ 00:15:01.641 END TEST nvmf_fio_host 00:15:01.641 ************************************ 00:15:01.898 09:40:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:01.898 09:40:56 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:01.898 09:40:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:01.898 09:40:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:01.898 09:40:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:01.898 ************************************ 00:15:01.898 START TEST nvmf_failover 00:15:01.898 ************************************ 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:01.898 * Looking for test storage... 00:15:01.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:01.898 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:01.899 Cannot find device "nvmf_tgt_br" 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:01.899 Cannot find device "nvmf_tgt_br2" 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:01.899 Cannot find device "nvmf_tgt_br" 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:01.899 Cannot find device "nvmf_tgt_br2" 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:01.899 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:02.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:02.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:02.194 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 
00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:02.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:15:02.195 00:15:02.195 --- 10.0.0.2 ping statistics --- 00:15:02.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.195 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:02.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:02.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:15:02.195 00:15:02.195 --- 10.0.0.3 ping statistics --- 00:15:02.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.195 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:02.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:02.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:02.195 00:15:02.195 --- 10.0.0.1 ping statistics --- 00:15:02.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.195 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75972 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75972 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 75972 ']' 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:02.195 09:40:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:02.453 [2024-07-15 09:40:56.669059] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:02.453 [2024-07-15 09:40:56.669178] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.453 [2024-07-15 09:40:56.806847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:02.710 [2024-07-15 09:40:56.931353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.710 [2024-07-15 09:40:56.931586] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.710 [2024-07-15 09:40:56.931678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.710 [2024-07-15 09:40:56.931751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.710 [2024-07-15 09:40:56.931825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
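The target application itself is launched inside that namespace (nvmf/common.sh@480 in the trace above); core mask 0xE gives it three reactors, matching the "Total cores available: 3" notice and the reactors reported on cores 1-3 below. A minimal sketch of the launch-and-wait step, where waitforlisten is the harness helper from autotest_common.sh that blocks until the RPC socket exists:

    # run the target in the test namespace; -m 0xE pins reactors to cores 1-3,
    # -e 0xFFFF enables all tracepoint groups, -i 0 selects shared-memory id 0
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # wait until the target listens on /var/tmp/spdk.sock so the rpc.py calls below can reach it
    waitforlisten "$nvmfpid"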
00:15:02.710 [2024-07-15 09:40:56.932076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.710 [2024-07-15 09:40:56.932807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.710 [2024-07-15 09:40:56.932843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.710 [2024-07-15 09:40:56.990251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:03.273 09:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.273 09:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:03.273 09:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:03.273 09:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:03.273 09:40:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:03.546 09:40:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.546 09:40:57 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:03.546 [2024-07-15 09:40:57.995853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.803 09:40:58 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:04.060 Malloc0 00:15:04.060 09:40:58 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:04.318 09:40:58 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:04.575 09:40:58 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.832 [2024-07-15 09:40:59.101019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.832 09:40:59 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:05.090 [2024-07-15 09:40:59.433276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:05.090 09:40:59 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:05.364 [2024-07-15 09:40:59.689446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:05.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
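With the target up, host/failover.sh configures it entirely through rpc.py (full path /home/vagrant/spdk_repo/spdk/scripts/rpc.py in the trace). Stripped of the xtrace prefixes, the sequence above amounts to: create the TCP transport, create a malloc bdev sized per MALLOC_BDEV_SIZE=64/MALLOC_BLOCK_SIZE=512, expose it as a namespace of subsystem cnode1, and listen on three ports of the same address so the failover test has multiple paths to the same namespace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422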
00:15:05.364 09:40:59 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=76030 00:15:05.364 09:40:59 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:05.364 09:40:59 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:05.364 09:40:59 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 76030 /var/tmp/bdevperf.sock 00:15:05.364 09:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76030 ']' 00:15:05.364 09:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:05.364 09:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.364 09:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:05.364 09:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.364 09:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:06.318 09:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.318 09:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:06.318 09:41:00 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:06.883 NVMe0n1 00:15:06.883 09:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:07.236 00:15:07.236 09:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=76059 00:15:07.236 09:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:07.236 09:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:08.190 09:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.449 [2024-07-15 09:41:02.752818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.752883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.752906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.752917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.752926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.752935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to 
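The initiator side is a bdevperf instance with its own RPC socket: started with -z it waits for configuration, and the two bdev_nvme_attach_controller calls traced above give the NVMe0 controller one connection on port 4420 and another on 4421, producing the NVMe0n1 bdev. A condensed sketch of that sequence plus the step the next trace lines perform, where the 4420 listener is removed while the verify workload runs (rpc.py and bdevperf.py abbreviate the repository paths shown in the trace):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # start the verify workload, then drop the primary listener while I/O is in flight
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420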
be set 00:15:08.449 [2024-07-15 09:41:02.752943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.752952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.752960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.752968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.752977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.752985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.752993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.449 [2024-07-15 09:41:02.753162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753319] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753327] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 
00:15:08.450 [2024-07-15 09:41:02.753510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753591] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 [2024-07-15 09:41:02.753662] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99b950 is same with the state(5) to be set 00:15:08.450 09:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:11.753 09:41:05 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:11.753 00:15:11.753 09:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:12.010 09:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:15.291 09:41:09 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.291 [2024-07-15 09:41:09.671945] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.291 09:41:09 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:16.682 09:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:16.682 09:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 76059 00:15:23.241 0 00:15:23.241 09:41:16 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 76030 00:15:23.241 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76030 ']' 00:15:23.241 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76030 00:15:23.241 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:23.241 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:23.241 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76030 00:15:23.241 killing process with pid 76030 00:15:23.241 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:23.241 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:23.241 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76030' 00:15:23.241 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76030 00:15:23.241 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76030 00:15:23.241 09:41:16 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:23.241 [2024-07-15 09:40:59.758805] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:23.241 [2024-07-15 09:40:59.758922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76030 ] 00:15:23.241 [2024-07-15 09:40:59.891161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.241 [2024-07-15 09:41:00.017457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.241 [2024-07-15 09:41:00.074163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:23.241 Running I/O for 15 seconds... 
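[Editor's note] For readers following the trace, the block below is a rough, simplified reconstruction of the failover sequence that the "-- host/failover.sh@NN" markers above appear to be executing: bdevperf is started against its own RPC socket, the same subsystem is attached through listener ports 4420 and 4421, I/O is started, and listeners are then removed and re-added to force path failover and failback. All paths, ports, and RPC arguments are copied verbatim from the trace itself; the variable names, backgrounding with $!, and the plain "wait" at the end are simplifications and assumptions, not the actual test script.

#!/usr/bin/env bash
# Hedged sketch of the listener-failover flow seen in the trace above.
rootdir=/home/vagrant/spdk_repo/spdk
rpc="$rootdir/scripts/rpc.py"
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1
ip=10.0.0.2

# @30-@34: start bdevperf in RPC-wait mode (-z) with a 15 s, qd=128, 4 KiB verify workload
"$rootdir/build/examples/bdevperf" -z -r "$sock" -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!

# @35-@36: attach the same subsystem through two listener ports (primary 4420, secondary 4421)
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a "$ip" -s 4420 -f ipv4 -n "$nqn"
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a "$ip" -s 4421 -f ipv4 -n "$nqn"

# @38-@43: start I/O, then drop the primary listener to force a failover to 4421
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &
sleep 1
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a "$ip" -s 4420

# @45-@48: add a third path (4422) and drop 4421, forcing a second failover
sleep 3
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a "$ip" -s 4422 -f ipv4 -n "$nqn"
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a "$ip" -s 4421

# @50-@57: restore 4420, then remove 4422, failing back to the original port
sleep 3
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a "$ip" -s 4420
sleep 1
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a "$ip" -s 4422

wait  # let the 15-second bdevperf run complete before teardown

Note: the burst of "recv state of tqpair ... is same with the state(5)" errors at 09:41:02 and the ABORTED - SQ DELETION completions dumped from try.txt below appear to line up with the first nvmf_subsystem_remove_listener on port 4420; in-flight I/O on the dropped path is aborted, which is the path failover the test is exercising.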
00:15:23.241 [2024-07-15 09:41:02.754260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754603] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.241 [2024-07-15 09:41:02.754738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.241 [2024-07-15 09:41:02.754753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.754767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.754782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.754796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.754812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.754831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.754847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.754861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.754876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.754890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.754921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.754935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.754951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.754964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.754980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.754993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:76 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61176 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.242 [2024-07-15 09:41:02.755755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.242 [2024-07-15 09:41:02.755769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.755793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.755818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.755837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.755851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.755867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:23.243 [2024-07-15 09:41:02.755880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.755906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.755921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.755937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.755951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.755967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.755980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.755995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756181] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.243 [2024-07-15 09:41:02.756869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.243 [2024-07-15 09:41:02.756884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.244 [2024-07-15 09:41:02.756909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.756926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.244 [2024-07-15 09:41:02.756940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.756956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.244 [2024-07-15 09:41:02.756974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.756997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.244 [2024-07-15 09:41:02.757011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.244 [2024-07-15 09:41:02.757040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.244 [2024-07-15 09:41:02.757068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.244 [2024-07-15 09:41:02.757097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.244 [2024-07-15 09:41:02.757140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.244 [2024-07-15 09:41:02.757169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 
[2024-07-15 09:41:02.757456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.244 [2024-07-15 09:41:02.757616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.244 [2024-07-15 09:41:02.757644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b77c0 is same with the state(5) to be set 00:15:23.244 [2024-07-15 09:41:02.757676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.244 [2024-07-15 09:41:02.757686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.244 [2024-07-15 09:41:02.757697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61600 len:8 PRP1 0x0 PRP2 0x0 00:15:23.244 [2024-07-15 09:41:02.757709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.244 [2024-07-15 09:41:02.757734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.244 [2024-07-15 09:41:02.757744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:61728 len:8 PRP1 0x0 PRP2 0x0 00:15:23.244 [2024-07-15 09:41:02.757764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.244 [2024-07-15 09:41:02.757794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.244 [2024-07-15 09:41:02.757804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61736 len:8 PRP1 0x0 PRP2 0x0 00:15:23.244 [2024-07-15 09:41:02.757817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.244 [2024-07-15 09:41:02.757840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.244 [2024-07-15 09:41:02.757850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61744 len:8 PRP1 0x0 PRP2 0x0 00:15:23.244 [2024-07-15 09:41:02.757871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.244 [2024-07-15 09:41:02.757885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.244 [2024-07-15 09:41:02.757907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.244 [2024-07-15 09:41:02.757918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61752 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.757932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.757945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.757955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.757965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61760 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.757978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.757992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61768 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61776 len:8 PRP1 0x0 PRP2 0x0 
00:15:23.245 [2024-07-15 09:41:02.758071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61784 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61792 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61800 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61808 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61816 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61824 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61832 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61840 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61848 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61856 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61864 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.245 [2024-07-15 09:41:02.758650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.245 [2024-07-15 09:41:02.758660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61872 len:8 PRP1 0x0 PRP2 0x0 00:15:23.245 [2024-07-15 09:41:02.758673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758739] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7b77c0 was disconnected and freed. reset controller. 00:15:23.245 [2024-07-15 09:41:02.758758] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:23.245 [2024-07-15 09:41:02.758813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.245 [2024-07-15 09:41:02.758833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.245 [2024-07-15 09:41:02.758861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.245 [2024-07-15 09:41:02.758889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.245 [2024-07-15 09:41:02.758930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.245 [2024-07-15 09:41:02.758944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:23.245 [2024-07-15 09:41:02.762918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:23.245 [2024-07-15 09:41:02.762964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x766570 (9): Bad file descriptor 00:15:23.245 [2024-07-15 09:41:02.799206] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:23.246 [2024-07-15 09:41:06.401418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.246 [2024-07-15 09:41:06.401493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.246 [2024-07-15 09:41:06.401564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.246 [2024-07-15 09:41:06.401598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.246 [2024-07-15 09:41:06.401626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.246 [2024-07-15 09:41:06.401656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.246 [2024-07-15 09:41:06.401685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.246 [2024-07-15 09:41:06.401714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.246 [2024-07-15 09:41:06.401744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.401774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.401803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401818] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.401832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.401861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.401890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.401936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.401976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.401992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.402006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.402022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.402036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.402053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.402067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.402083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.402096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.402111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.402125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.402140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.402154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.402170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.402184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.402199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.402213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.402228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.402242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.402257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.402270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.402285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.402299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.402314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.246 [2024-07-15 09:41:06.402328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.246 [2024-07-15 09:41:06.402350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:91 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.402509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.402539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.402568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.402597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.402626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.402655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.402684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.402720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65376 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.247 [2024-07-15 09:41:06.402975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.402991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:23.247 [2024-07-15 09:41:06.403062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403359] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.247 [2024-07-15 09:41:06.403615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.247 [2024-07-15 09:41:06.403628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.403644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.403657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.403672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.403686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.403701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.403714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.403730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.403744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.403759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.403772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.403787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.403809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.403824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.403838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.403853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.403873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.403889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.403922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.403939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.403953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.403968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.403983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.403999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 
[2024-07-15 09:41:06.404300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.248 [2024-07-15 09:41:06.404436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.404465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.404495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.404524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.404553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.404582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.404612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.404647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.248 [2024-07-15 09:41:06.404677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.248 [2024-07-15 09:41:06.404693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.404706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.404721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.404735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.404750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.404764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.404780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.404793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.404808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.404822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.404837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.404850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.404866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.404884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.404910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:107 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.404925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.404941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:06.404954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.404969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:06.404983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.404998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:06.405012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:06.405050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:06.405078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:06.405116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:06.405155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:06.405184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.405212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65640 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.405241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.405270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.405305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.405334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.405362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:06.405397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e8d30 is same with the state(5) to be set 00:15:23.249 [2024-07-15 09:41:06.405428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.249 [2024-07-15 09:41:06.405445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.249 [2024-07-15 09:41:06.405457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65688 len:8 PRP1 0x0 PRP2 0x0 00:15:23.249 [2024-07-15 09:41:06.405470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405527] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7e8d30 was disconnected and freed. reset controller. 
00:15:23.249 [2024-07-15 09:41:06.405546] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:23.249 [2024-07-15 09:41:06.405601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.249 [2024-07-15 09:41:06.405621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.249 [2024-07-15 09:41:06.405650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.249 [2024-07-15 09:41:06.405677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.249 [2024-07-15 09:41:06.405704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:06.405717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:23.249 [2024-07-15 09:41:06.405751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x766570 (9): Bad file descriptor 00:15:23.249 [2024-07-15 09:41:06.409565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:23.249 [2024-07-15 09:41:06.449737] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:23.249 [2024-07-15 09:41:10.921340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.249 [2024-07-15 09:41:10.921428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:10.921450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.249 [2024-07-15 09:41:10.921464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:10.921479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.249 [2024-07-15 09:41:10.921493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:10.921507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.249 [2024-07-15 09:41:10.921520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:10.921534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x766570 is same with the state(5) to be set 00:15:23.249 [2024-07-15 09:41:10.922798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.249 [2024-07-15 09:41:10.922857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:10.922883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:10.922913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:10.922931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:10.922945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:10.922960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:10.922973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:10.922989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:10.923002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:10.923018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.249 [2024-07-15 09:41:10.923032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.249 [2024-07-15 09:41:10.923047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.250 [2024-07-15 09:41:10.923392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.250 [2024-07-15 09:41:10.923422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.250 [2024-07-15 09:41:10.923452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.250 [2024-07-15 09:41:10.923481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.250 [2024-07-15 09:41:10.923511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.250 [2024-07-15 09:41:10.923540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.250 [2024-07-15 09:41:10.923570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.250 [2024-07-15 09:41:10.923599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:23.250 [2024-07-15 09:41:10.923958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:127144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.923971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.923987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.924000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.924023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.924039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.924054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.924069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.924085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.924098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.250 [2024-07-15 09:41:10.924114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.250 [2024-07-15 09:41:10.924128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.251 [2024-07-15 09:41:10.924157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.251 [2024-07-15 09:41:10.924185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.251 [2024-07-15 09:41:10.924214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924259] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.251 [2024-07-15 09:41:10.924919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.251 [2024-07-15 09:41:10.924935] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.924959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.924974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.924989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.925018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.925372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.925402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.925432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.925461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.925490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.925519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.925547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.925583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 [2024-07-15 09:41:10.925861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:23.252 
[2024-07-15 09:41:10.925912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.925943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.925971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.925987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.926000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.926016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.926030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.926054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.926069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.926085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.926098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.926115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.252 [2024-07-15 09:41:10.926128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.926143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cab60 is same with the state(5) to be set 00:15:23.252 [2024-07-15 09:41:10.926161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.252 [2024-07-15 09:41:10.926172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.252 [2024-07-15 09:41:10.926183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126920 len:8 PRP1 0x0 PRP2 0x0 00:15:23.252 [2024-07-15 09:41:10.926196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.252 [2024-07-15 09:41:10.926211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.252 [2024-07-15 09:41:10.926223] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.252 [2024-07-15 09:41:10.926234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127376 len:8 PRP1 0x0 PRP2 0x0 00:15:23.252 [2024-07-15 09:41:10.926247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127384 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127392 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127400 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127408 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127416 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127424 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127432 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127440 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127448 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127456 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127464 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 
[2024-07-15 09:41:10.926824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127472 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127480 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.926950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.926961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.926971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127488 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.926985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.927008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.927019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.927029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127496 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.927042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.927056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.927066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.927076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127504 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.927089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.927103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.927112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.927123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127512 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.927136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.927149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.927159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.927169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127520 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.927187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.253 [2024-07-15 09:41:10.927201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.253 [2024-07-15 09:41:10.927211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.253 [2024-07-15 09:41:10.927224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127528 len:8 PRP1 0x0 PRP2 0x0 00:15:23.253 [2024-07-15 09:41:10.927237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.254 [2024-07-15 09:41:10.927263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.254 [2024-07-15 09:41:10.927274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.254 [2024-07-15 09:41:10.927284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127536 len:8 PRP1 0x0 PRP2 0x0 00:15:23.254 [2024-07-15 09:41:10.927298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.254 [2024-07-15 09:41:10.927311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.254 [2024-07-15 09:41:10.927320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.254 [2024-07-15 09:41:10.927331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127544 len:8 PRP1 0x0 PRP2 0x0 00:15:23.254 [2024-07-15 09:41:10.927344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.254 [2024-07-15 09:41:10.927357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:23.254 [2024-07-15 09:41:10.927367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:23.254 [2024-07-15 09:41:10.927377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127552 len:8 PRP1 0x0 PRP2 0x0 00:15:23.254 [2024-07-15 09:41:10.927390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.254 [2024-07-15 09:41:10.927470] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7cab60 was disconnected and freed. reset controller. 00:15:23.254 [2024-07-15 09:41:10.927489] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:23.254 [2024-07-15 09:41:10.927504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:23.254 [2024-07-15 09:41:10.931514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:23.254 [2024-07-15 09:41:10.931576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x766570 (9): Bad file descriptor 00:15:23.254 [2024-07-15 09:41:10.967331] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
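The burst of ABORTED - SQ DELETION notices above is the expected signature of a path switch: every command still outstanding on the old qpair (0x7cab60 here) is failed back with SQ DELETION status before the qpair is freed, after which bdev_nvme starts the failover from 10.0.0.2:4422 back to 10.0.0.2:4420 and resets the controller. The grep a few lines below counts those "Resetting controller successful" messages and requires exactly three, matching the three failovers this phase drives across the 4420/4421/4422 listeners. A minimal sketch of the same check run by hand, assuming the first bdevperf pass logged to the same try.txt file that is cat'd and removed later in this section (that path is an assumption, not shown for this pass):
# Hypothetical manual re-run of the check at host/failover.sh@65 below.
$ grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
3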
00:15:23.254
00:15:23.254 Latency(us)
00:15:23.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:23.254 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:23.254 Verification LBA range: start 0x0 length 0x4000
00:15:23.254 NVMe0n1 : 15.01 8712.50 34.03 224.74 0.00 14288.89 662.81 17515.99
00:15:23.254 ===================================================================================================================
00:15:23.254 Total : 8712.50 34.03 224.74 0.00 14288.89 662.81 17515.99
00:15:23.254 Received shutdown signal, test time was about 15.000000 seconds
00:15:23.254
00:15:23.254 Latency(us)
00:15:23.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:23.254 ===================================================================================================================
00:15:23.254 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:23.254 09:41:16 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:15:23.254 09:41:16 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:15:23.254 09:41:16 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:15:23.254 09:41:16 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:15:23.254 09:41:16 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76232
00:15:23.254 09:41:16 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76232 /var/tmp/bdevperf.sock
00:15:23.254 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76232 ']'
00:15:23.254 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:15:23.254 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:15:23.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:15:23.254 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
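The 15-second verify summary above is internally consistent: MiB/s is just IOPS times the 4096-byte IO size, and with 128 commands kept in flight the average latency should land near the queue depth divided by the total completion rate (successful plus failed IOs). A quick arithmetic sketch, not part of the captured run:
# 8712.50 IOPS at 4 KiB per IO -> 34.03 MiB/s; 128 outstanding IOs / (8712.50 + 224.74) completions/s -> ~14322 us,
# in line with the reported 14288.89 us average.
$ awk 'BEGIN { printf "%.2f MiB/s  %.0f us\n", 8712.50 * 4096 / 1048576, 128 / (8712.50 + 224.74) * 1e6 }'
34.03 MiB/s  14322 us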
00:15:23.254 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:15:23.254 09:41:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:15:23.512 09:41:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:23.512 09:41:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:15:23.512 09:41:17 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:15:23.771 [2024-07-15 09:41:18.230482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:15:24.029 09:41:18 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:15:24.029 [2024-07-15 09:41:18.482758] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:15:24.287 09:41:18 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:24.545 NVMe0n1
00:15:24.545 09:41:18 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:24.804
00:15:24.804 09:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:25.061
00:15:25.319 09:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:25.319 09:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:15:25.319 09:41:19 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:26.014 09:41:20 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:15:29.319 09:41:23 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:29.319 09:41:23 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:15:29.319 09:41:23 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76313
00:15:29.319 09:41:23 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:15:29.319 09:41:23 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76313
00:15:30.253 0
00:15:30.253 09:41:24 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:15:30.253 [2024-07-15 09:41:16.977688] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization...
00:15:30.253 [2024-07-15 09:41:16.978384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76232 ] 00:15:30.253 [2024-07-15 09:41:17.115023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.253 [2024-07-15 09:41:17.230078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.253 [2024-07-15 09:41:17.284131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:30.253 [2024-07-15 09:41:20.085911] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:30.253 [2024-07-15 09:41:20.086561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.253 [2024-07-15 09:41:20.086684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.253 [2024-07-15 09:41:20.086818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.253 [2024-07-15 09:41:20.086941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.253 [2024-07-15 09:41:20.087043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.253 [2024-07-15 09:41:20.087142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.253 [2024-07-15 09:41:20.087258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.253 [2024-07-15 09:41:20.087345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.253 [2024-07-15 09:41:20.087432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:30.253 [2024-07-15 09:41:20.087574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:30.253 [2024-07-15 09:41:20.087704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad9570 (9): Bad file descriptor 00:15:30.253 [2024-07-15 09:41:20.090861] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:30.253 Running I/O for 1 seconds... 
00:15:30.253
00:15:30.253 Latency(us)
00:15:30.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:30.253 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:30.253 Verification LBA range: start 0x0 length 0x4000
00:15:30.253 NVMe0n1 : 1.02 6422.64 25.09 0.00 0.00 19787.80 2546.97 26333.56
00:15:30.253 ===================================================================================================================
00:15:30.253 Total : 6422.64 25.09 0.00 0.00 19787.80 2546.97 26333.56
00:15:30.253 09:41:24 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:30.253 09:41:24 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:15:30.511 09:41:24 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:30.769 09:41:25 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:30.769 09:41:25 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:15:31.028 09:41:25 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:31.286 09:41:25 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:15:34.578 09:41:28 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:34.578 09:41:28 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:15:34.578 09:41:28 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76232
00:15:34.578 09:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76232 ']'
00:15:34.578 09:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76232
00:15:34.578 09:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:15:34.578 09:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:15:34.578 09:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76232
00:15:34.578 killing process with pid 76232 09:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 09:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 09:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76232' 09:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76232 09:41:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76232
00:15:34.836 09:41:29 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:15:34.836 09:41:29 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:15:35.095 09:41:29
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:35.095 rmmod nvme_tcp 00:15:35.095 rmmod nvme_fabrics 00:15:35.095 rmmod nvme_keyring 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75972 ']' 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75972 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 75972 ']' 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 75972 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75972 00:15:35.095 killing process with pid 75972 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75972' 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 75972 00:15:35.095 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 75972 00:15:35.354 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:35.354 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:35.354 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:35.354 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.354 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:35.354 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.354 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.354 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.354 09:41:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:35.613 00:15:35.613 real 0m33.681s 00:15:35.613 user 2m10.145s 00:15:35.613 sys 0m6.093s 00:15:35.613 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:35.613 09:41:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:35.613 ************************************ 00:15:35.613 END TEST nvmf_failover 00:15:35.613 ************************************ 00:15:35.613 09:41:29 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:35.613 09:41:29 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:35.613 09:41:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:35.613 09:41:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:35.613 09:41:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:35.613 ************************************ 00:15:35.613 START TEST nvmf_host_discovery 00:15:35.613 ************************************ 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:35.613 * Looking for test storage... 00:15:35.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.613 09:41:29 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:35.614 09:41:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:35.614 Cannot find device "nvmf_tgt_br" 00:15:35.614 
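The failed `ip link set nvmf_tgt_br nomaster` above is the start of nvmf_veth_init's best-effort cleanup of interfaces left over from a previous run; the harness tolerates the missing devices (the `# true` entries that follow in the trace). Outside the harness the same idea is commonly written with explicit error suppression; a small sketch under that assumption, not the harness's own code:

    # ignore errors from stale devices that may not exist on a clean machine
    ip link set nvmf_tgt_br nomaster 2>/dev/null || true
    ip link delete nvmf_br type bridge 2>/dev/null || true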
09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:35.614 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.614 Cannot find device "nvmf_tgt_br2" 00:15:35.614 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:35.614 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:35.614 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:35.614 Cannot find device "nvmf_tgt_br" 00:15:35.614 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:35.614 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:35.614 Cannot find device "nvmf_tgt_br2" 00:15:35.614 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:35.614 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:35.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:15:35.873 00:15:35.873 --- 10.0.0.2 ping statistics --- 00:15:35.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.873 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:35.873 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:35.873 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:15:35.873 00:15:35.873 --- 10.0.0.3 ping statistics --- 00:15:35.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.873 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:35.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:35.873 00:15:35.873 --- 10.0.0.1 ping statistics --- 00:15:35.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.873 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.873 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76582 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76582 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76582 ']' 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.874 09:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.132 [2024-07-15 09:41:30.348824] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:36.132 [2024-07-15 09:41:30.348903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.132 [2024-07-15 09:41:30.486770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.390 [2024-07-15 09:41:30.607553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
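By this point nvmf_veth_init has built the test topology and the target has been launched inside it: nvmf_init_if (10.0.0.1/24) stays in the default namespace as the initiator side, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends are joined through the nvmf_br bridge; the three pings above confirm connectivity. A condensed sketch of that topology using only names, addresses, and commands from the trace (the second target interface and the individual link-up steps are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # run the target where the 10.0.0.2/10.0.0.3 interfaces live
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &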
00:15:36.390 [2024-07-15 09:41:30.608021] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.390 [2024-07-15 09:41:30.608266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.390 [2024-07-15 09:41:30.608528] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.390 [2024-07-15 09:41:30.608769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.390 [2024-07-15 09:41:30.609007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.390 [2024-07-15 09:41:30.662519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:36.957 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.957 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:36.957 09:41:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.957 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.957 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.957 09:41:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.957 09:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.957 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.957 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.957 [2024-07-15 09:41:31.419034] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.215 [2024-07-15 09:41:31.427150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.215 null0 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.215 null1 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.215 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76614 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76614 /tmp/host.sock 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76614 ']' 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.215 09:41:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:37.215 [2024-07-15 09:41:31.507586] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:37.215 [2024-07-15 09:41:31.507990] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76614 ] 00:15:37.215 [2024-07-15 09:41:31.646832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.473 [2024-07-15 09:41:31.775316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.473 [2024-07-15 09:41:31.829855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.406 09:41:32 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:38.406 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.407 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:38.407 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.664 [2024-07-15 09:41:32.911544] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:38.664 
09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:38.664 09:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:38.664 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:38.665 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:38.665 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:38.665 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:38.665 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.665 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:38.665 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:38.665 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:38.665 09:41:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:38.665 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.922 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:15:38.922 09:41:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:15:39.180 [2024-07-15 09:41:33.532668] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:39.180 [2024-07-15 09:41:33.532930] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:39.180 [2024-07-15 09:41:33.532998] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:39.180 [2024-07-15 09:41:33.538730] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:39.180 [2024-07-15 09:41:33.596325] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:39.180 [2024-07-15 09:41:33.596525] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.802 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:40.076 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.077 [2024-07-15 09:41:34.505105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:40.077 [2024-07-15 09:41:34.505956] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:40.077 [2024-07-15 09:41:34.505990] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:40.077 [2024-07-15 09:41:34.511925] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:15:40.077 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:40.336 [2024-07-15 09:41:34.576308] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:40.336 [2024-07-15 09:41:34.576335] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:40.336 [2024-07-15 09:41:34.576342] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:40.336 09:41:34 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.336 [2024-07-15 09:41:34.746164] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:40.336 [2024-07-15 09:41:34.746200] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:40.336 [2024-07-15 09:41:34.752156] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:40.336 [2024-07-15 09:41:34.752189] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:40.336 [2024-07-15 09:41:34.752309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.336 [2024-07-15 09:41:34.752343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.336 [2024-07-15 09:41:34.752373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.336 [2024-07-15 09:41:34.752383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.336 [2024-07-15 09:41:34.752393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:15:40.336 [2024-07-15 09:41:34.752403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.336 [2024-07-15 09:41:34.752413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.336 [2024-07-15 09:41:34.752422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.336 [2024-07-15 09:41:34.752432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a4e600 is same with the state(5) to be set 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:40.336 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.595 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.596 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:40.596 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:15:40.596 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:40.596 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:40.596 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.596 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.596 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:40.596 09:41:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:40.596 09:41:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:15:40.596 09:41:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.596 09:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.854 09:41:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.788 [2024-07-15 09:41:36.173301] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:41.788 [2024-07-15 09:41:36.173339] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:41.788 [2024-07-15 09:41:36.173359] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:41.788 [2024-07-15 09:41:36.179330] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:41.788 [2024-07-15 09:41:36.239833] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:41.788 [2024-07-15 09:41:36.240223] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:41.788 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.788 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:41.788 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:41.788 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:41.788 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:41.788 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.788 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:41.788 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:41.788 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:41.788 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.788 09:41:36 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:15:42.046 request: 00:15:42.046 { 00:15:42.046 "name": "nvme", 00:15:42.046 "trtype": "tcp", 00:15:42.046 "traddr": "10.0.0.2", 00:15:42.046 "adrfam": "ipv4", 00:15:42.046 "trsvcid": "8009", 00:15:42.046 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:42.046 "wait_for_attach": true, 00:15:42.046 "method": "bdev_nvme_start_discovery", 00:15:42.046 "req_id": 1 00:15:42.046 } 00:15:42.046 Got JSON-RPC error response 00:15:42.046 response: 00:15:42.046 { 00:15:42.046 "code": -17, 00:15:42.046 "message": "File exists" 00:15:42.046 } 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.046 request: 00:15:42.046 { 00:15:42.046 "name": "nvme_second", 00:15:42.046 "trtype": "tcp", 00:15:42.046 "traddr": "10.0.0.2", 00:15:42.046 "adrfam": "ipv4", 00:15:42.046 "trsvcid": "8009", 00:15:42.046 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:42.046 "wait_for_attach": true, 00:15:42.046 "method": "bdev_nvme_start_discovery", 00:15:42.046 "req_id": 1 00:15:42.046 } 00:15:42.046 Got JSON-RPC error response 00:15:42.046 response: 00:15:42.046 { 00:15:42.046 "code": -17, 00:15:42.046 "message": "File exists" 00:15:42.046 } 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:42.046 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.047 09:41:36 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.047 09:41:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:43.417 [2024-07-15 09:41:37.512689] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:43.417 [2024-07-15 09:41:37.512749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1e070 with addr=10.0.0.2, port=8010 00:15:43.417 [2024-07-15 09:41:37.512776] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:43.417 [2024-07-15 09:41:37.512787] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:43.417 [2024-07-15 09:41:37.512797] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:44.348 [2024-07-15 09:41:38.512726] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:44.348 [2024-07-15 09:41:38.512824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1e070 with addr=10.0.0.2, port=8010 00:15:44.348 [2024-07-15 09:41:38.512850] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:44.348 [2024-07-15 09:41:38.512862] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:44.348 [2024-07-15 09:41:38.512873] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:45.376 [2024-07-15 09:41:39.512559] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:45.376 request: 00:15:45.376 { 00:15:45.376 "name": "nvme_second", 00:15:45.376 "trtype": "tcp", 00:15:45.376 "traddr": "10.0.0.2", 00:15:45.376 "adrfam": "ipv4", 00:15:45.376 "trsvcid": "8010", 00:15:45.376 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:45.376 "wait_for_attach": false, 00:15:45.376 "attach_timeout_ms": 3000, 00:15:45.376 "method": "bdev_nvme_start_discovery", 00:15:45.376 "req_id": 1 00:15:45.376 } 00:15:45.376 Got JSON-RPC error response 00:15:45.376 response: 00:15:45.376 { 00:15:45.376 "code": -110, 
00:15:45.376 "message": "Connection timed out" 00:15:45.376 } 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76614 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:45.376 rmmod nvme_tcp 00:15:45.376 rmmod nvme_fabrics 00:15:45.376 rmmod nvme_keyring 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76582 ']' 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76582 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76582 ']' 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76582 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76582 00:15:45.376 killing process with pid 76582 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76582' 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76582 00:15:45.376 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76582 00:15:45.642 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:45.642 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:45.642 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:45.642 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:45.642 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:45.642 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.642 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.642 09:41:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.642 09:41:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:45.642 ************************************ 00:15:45.642 END TEST nvmf_host_discovery 00:15:45.642 ************************************ 00:15:45.642 00:15:45.642 real 0m10.125s 00:15:45.642 user 0m19.630s 00:15:45.642 sys 0m1.966s 00:15:45.642 09:41:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.642 09:41:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.642 09:41:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:45.642 09:41:40 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:45.642 09:41:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:45.642 09:41:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.642 09:41:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:45.642 ************************************ 00:15:45.642 START TEST nvmf_host_multipath_status 00:15:45.642 ************************************ 00:15:45.642 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:45.900 * Looking for test storage... 
00:15:45.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.900 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:45.901 Cannot find device "nvmf_tgt_br" 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:15:45.901 Cannot find device "nvmf_tgt_br2" 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:45.901 Cannot find device "nvmf_tgt_br" 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:45.901 Cannot find device "nvmf_tgt_br2" 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:45.901 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.159 09:41:40 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:46.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:15:46.159 00:15:46.159 --- 10.0.0.2 ping statistics --- 00:15:46.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.159 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:46.159 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.159 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:46.159 00:15:46.159 --- 10.0.0.3 ping statistics --- 00:15:46.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.159 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:46.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:46.159 00:15:46.159 --- 10.0.0.1 ping statistics --- 00:15:46.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.159 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=77067 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 77067 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 77067 ']' 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.159 09:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:46.159 [2024-07-15 09:41:40.554201] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
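The nvmf_veth_init steps traced above boil down to the following topology (a condensed sketch using only the interface names and addresses visible in the log; the real helper in nvmf/common.sh also tears down any leftover devices first, which is what the "Cannot find device" / "Cannot open network namespace" messages reflect):

  ip netns add nvmf_tgt_ns_spdk                                   # the target runs inside this namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target-side veth pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target-side veth pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                                 # bridge joining the host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

All interfaces are then brought up and connectivity is verified by the three pings above; nvmf_tgt itself is launched with the "ip netns exec nvmf_tgt_ns_spdk" prefix so that its listeners live on the 10.0.0.2 side of the bridge.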
00:15:46.159 [2024-07-15 09:41:40.554599] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.417 [2024-07-15 09:41:40.691962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:46.417 [2024-07-15 09:41:40.827291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.417 [2024-07-15 09:41:40.827360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.417 [2024-07-15 09:41:40.827375] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.417 [2024-07-15 09:41:40.827385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.417 [2024-07-15 09:41:40.827394] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.417 [2024-07-15 09:41:40.827999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.417 [2024-07-15 09:41:40.828013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.674 [2024-07-15 09:41:40.886113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:47.239 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.239 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:47.239 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:47.239 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:47.239 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:47.239 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.239 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=77067 00:15:47.239 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:47.497 [2024-07-15 09:41:41.868003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.497 09:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:47.755 Malloc0 00:15:47.755 09:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:48.013 09:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:48.270 09:41:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:48.528 [2024-07-15 09:41:42.868321] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.528 09:41:42 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:48.787 [2024-07-15 09:41:43.184469] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:48.787 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=77123 00:15:48.787 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:48.787 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:48.787 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 77123 /var/tmp/bdevperf.sock 00:15:48.787 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 77123 ']' 00:15:48.787 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:48.787 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.787 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:48.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:48.787 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.787 09:41:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:50.185 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:50.185 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:15:50.185 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:50.185 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:15:50.442 Nvme0n1 00:15:50.442 09:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:50.700 Nvme0n1 00:15:50.991 09:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:50.991 09:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:52.893 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:52.893 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:53.152 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:53.410 09:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:54.378 09:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:54.378 09:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:54.378 09:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.378 09:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:54.635 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.635 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:54.635 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.635 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:54.893 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:54.893 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:54.893 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.893 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:55.151 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.151 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:55.151 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.151 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:55.409 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.409 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:55.409 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.409 09:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:55.666 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.666 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:15:55.666 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.666 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:55.923 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.923 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:55.923 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:56.187 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:15:56.453 09:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:57.825 09:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:57.825 09:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:57.825 09:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.825 09:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:57.825 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:57.825 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:57.825 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:57.825 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.084 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.084 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:58.084 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.084 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:58.343 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.343 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:58.343 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.343 09:41:52 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:58.602 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.602 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:58.602 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.602 09:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:58.862 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.862 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:58.862 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.862 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:59.120 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:59.120 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:59.120 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:59.378 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:15:59.636 09:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:00.569 09:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:00.569 09:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:00.569 09:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.569 09:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:00.827 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:00.827 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:00.828 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.828 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:01.086 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:16:01.086 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:01.086 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.086 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:01.375 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.375 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:01.375 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.375 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:01.635 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.635 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:01.635 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.635 09:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:01.893 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.893 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:01.893 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.893 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:02.152 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.152 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:02.152 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:02.410 09:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:02.668 09:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:04.042 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:04.042 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:04.042 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.042 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:04.042 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.042 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:04.042 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:04.042 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.300 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:04.300 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:04.300 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:04.300 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.558 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.558 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:04.558 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.558 09:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:04.816 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.816 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:04.816 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.816 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:05.074 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.074 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:05.074 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.074 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:05.333 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:05.333 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:16:05.333 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:05.591 09:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:05.849 09:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:07.222 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:07.222 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:07.222 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.222 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:07.222 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:07.222 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:07.222 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.222 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:07.480 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:07.480 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:07.480 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.480 09:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:07.738 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.738 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:07.738 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.738 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:07.997 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:07.997 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:07.997 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:07.997 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:08.254 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:08.254 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:08.254 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.254 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:08.512 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:08.512 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:08.512 09:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:08.769 09:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:09.027 09:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:10.400 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:10.400 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:10.400 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.400 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:10.400 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:10.401 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:10.401 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.401 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:10.659 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.659 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:10.659 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:10.659 09:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.917 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.917 09:42:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:10.917 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.917 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:11.176 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.176 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:11.176 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:11.176 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.434 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:11.434 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:11.435 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.435 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:11.693 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.693 09:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:11.951 09:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:11.951 09:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:12.210 09:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:12.468 09:42:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:13.402 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:13.402 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:13.402 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.402 09:42:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:13.659 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.659 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:13.659 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.659 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:13.917 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:13.917 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:13.917 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.917 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:14.491 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.491 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:14.491 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.491 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:14.491 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.491 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:14.491 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:14.491 09:42:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.749 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.749 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:14.749 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.749 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:15.007 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.007 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:15.007 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:15.265 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:15.524 09:42:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:16.459 09:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:16.459 09:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:16.459 09:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.459 09:42:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:16.717 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:16.717 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:16.718 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:16.718 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.284 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.284 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:17.284 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.284 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:17.542 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.542 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:17.542 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.542 09:42:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:17.800 09:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.800 09:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:17.800 09:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.800 09:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:18.058 09:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.058 09:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:18.058 09:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.058 09:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:18.316 09:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.316 09:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:18.316 09:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:18.574 09:42:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:18.832 09:42:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:20.205 09:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:20.205 09:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:20.205 09:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:20.205 09:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.205 09:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.205 09:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:20.206 09:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.206 09:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:20.464 09:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.464 09:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:20.464 09:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.464 09:42:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:20.725 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.725 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:20.725 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.725 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:16:20.983 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.983 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:20.983 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:20.983 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.242 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.242 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:21.242 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:21.242 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.499 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.499 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:21.499 09:42:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:21.759 09:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:22.071 09:42:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:23.004 09:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:23.004 09:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:23.004 09:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.004 09:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:23.262 09:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.262 09:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:23.262 09:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.262 09:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:23.520 09:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:23.520 09:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:16:23.520 09:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.520 09:42:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:23.778 09:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.778 09:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:23.778 09:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.778 09:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:24.036 09:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.036 09:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:24.036 09:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:24.036 09:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.602 09:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.602 09:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:24.602 09:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.602 09:42:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:24.602 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:24.602 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 77123 00:16:24.602 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 77123 ']' 00:16:24.602 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 77123 00:16:24.602 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:24.602 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.602 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77123 00:16:24.602 killing process with pid 77123 00:16:24.602 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:24.602 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:24.602 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77123' 00:16:24.602 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 77123 
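Each check_status invocation above expands into six of these per-port queries (current, connected, accessible for ports 4420 and 4421, in that order). Reconstructed from the commands shown in the xtrace, the port_status helper in host/multipath_status.sh amounts to roughly the following; this is a sketch of the logged behaviour, not the verbatim script:

  port_status() {   # $1 = listener port, $2 = field to check (current/connected/accessible), $3 = expected value
    local value
    value=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
    [[ "$value" == "$3" ]]
  }

set_ANA_state similarly issues one nvmf_subsystem_listener_set_ana_state RPC per listener, as traced at multipath_status.sh@59/@60 above, before the next check_status round.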
00:16:24.602 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 77123 00:16:24.865 Connection closed with partial response: 00:16:24.865 00:16:24.865 00:16:24.865 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 77123 00:16:24.865 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:24.865 [2024-07-15 09:41:43.254828] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:24.865 [2024-07-15 09:41:43.255017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77123 ] 00:16:24.865 [2024-07-15 09:41:43.389001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.865 [2024-07-15 09:41:43.506788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.865 [2024-07-15 09:41:43.561435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:24.865 Running I/O for 90 seconds... 00:16:24.865 [2024-07-15 09:41:59.962111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.865 [2024-07-15 09:41:59.962223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.962295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.865 [2024-07-15 09:41:59.962321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.962350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.865 [2024-07-15 09:41:59.962369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.962396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.865 [2024-07-15 09:41:59.962432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.962472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.865 [2024-07-15 09:41:59.962490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.962515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.865 [2024-07-15 09:41:59.962533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.962557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73224 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.865 [2024-07-15 09:41:59.962594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.962620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.865 [2024-07-15 09:41:59.962638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.962693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.865 [2024-07-15 09:41:59.962718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.962745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.865 [2024-07-15 09:41:59.962764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.962806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.865 [2024-07-15 09:41:59.962848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.962895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.865 [2024-07-15 09:41:59.962931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.962956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.865 [2024-07-15 09:41:59.963031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.963092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.865 [2024-07-15 09:41:59.963114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.963140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.865 [2024-07-15 09:41:59.963158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.963184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.865 [2024-07-15 09:41:59.963202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:24.865 [2024-07-15 09:41:59.963229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.865 [2024-07-15 09:41:59.963248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.963274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.866 [2024-07-15 09:41:59.963296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.963322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.866 [2024-07-15 09:41:59.963340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.963366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.866 [2024-07-15 09:41:59.963384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.963410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.963428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.963454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.963472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.963498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.963547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.963606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.963626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.963672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.963711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.963739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.963758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:16:24.866 [2024-07-15 09:41:59.963784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.963812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.963839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.963858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.963884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.963902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.963928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.963947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.964079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.964141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.866 [2024-07-15 09:41:59.964189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.866 [2024-07-15 09:41:59.964234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.866 [2024-07-15 09:41:59.964279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.866 [2024-07-15 09:41:59.964338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.866 [2024-07-15 09:41:59.964384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.866 [2024-07-15 09:41:59.964439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.866 [2024-07-15 09:41:59.964484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.866 [2024-07-15 09:41:59.964560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.964625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.964668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.964713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.964757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.964801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.964844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.964920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.964954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.964973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.965017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.965072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.965133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.965156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.965182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.965202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.965229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.965248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.965273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.965292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.965318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.965337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.965363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.965382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.965407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 
[2024-07-15 09:41:59.965426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.965452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.965503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.965543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.965561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:24.866 [2024-07-15 09:41:59.965601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.866 [2024-07-15 09:41:59.965620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.965644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.965674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.965701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.965720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.965744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.965762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.965787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.965828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.965854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.965889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.965915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.965933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.965958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72880 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.965975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.966049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.966114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.966160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.966212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.966256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.966312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.966361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.966406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.966452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966478] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.966527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.966601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.966645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.966698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.966741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.966784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.966828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.966871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.966898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.966947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.967066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 
09:41:59.967093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.967111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.967190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.967237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.967290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.967345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.967390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.967435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.967480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.967525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.967570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 
cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.867 [2024-07-15 09:41:59.967616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.967675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.967720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.967766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.967811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:24.867 [2024-07-15 09:41:59.967837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.867 [2024-07-15 09:41:59.967856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.967883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.967902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.967940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.967964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.967991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.968010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.968037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.968075] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.968133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.968170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.968202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.968236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.968278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.968310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.968355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.968391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.968435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.968465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.968549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.968576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.969362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.969397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.969439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:41:59.969462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.969497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:41:59.969517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.969550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 
09:41:59.969570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.969605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:41:59.969628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.969677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:41:59.969697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.969730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:41:59.969750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.969798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:41:59.969817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.969868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:41:59.969893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.969955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.970011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.970049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.970069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.970103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.970140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.970175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.970194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.970236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73144 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.970256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.970290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.970309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.970344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.970364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:41:59.970398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:41:59.970417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.316467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:42:16.316564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.316630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:42:16.316653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.316677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:42:16.316696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.316726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:42:16.316741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.316763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:42:16.316808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.316833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:42:16.316848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.316870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:42:16.316885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.316923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:42:16.316944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.316966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.868 [2024-07-15 09:42:16.316981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.317003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:42:16.317017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.317039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:42:16.317053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.317086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:42:16.317108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.317131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:42:16.317146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.317167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:42:16.317181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:24.868 [2024-07-15 09:42:16.317202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.868 [2024-07-15 09:42:16.317217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.317252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:16:24.869 [2024-07-15 09:42:16.317281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:130480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.317295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.317353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.317391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:130528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.317427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.317464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.317505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.317546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.317582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.317618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.317654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.317690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.317727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.317764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.317810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.317847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.317884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.317937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.317975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.317997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.318012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.318033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.318048] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.318070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.318084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.318106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.318120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.318142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.318156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.318178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.318192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.318214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.318229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.318250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.318274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.318298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.318314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.318355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.869 [2024-07-15 09:42:16.318381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.318403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.318418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.318446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.318460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.318482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.318497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.320078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.320108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.320137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.869 [2024-07-15 09:42:16.320155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:24.869 [2024-07-15 09:42:16.320193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.870 [2024-07-15 09:42:16.320209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.870 [2024-07-15 09:42:16.320246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.870 [2024-07-15 09:42:16.320286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.870 [2024-07-15 09:42:16.320323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.870 [2024-07-15 09:42:16.320373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.870 [2024-07-15 09:42:16.320412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320434] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.870 [2024-07-15 09:42:16.320449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.870 [2024-07-15 09:42:16.320486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.870 [2024-07-15 09:42:16.320522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.870 [2024-07-15 09:42:16.320559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.870 [2024-07-15 09:42:16.320595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.870 [2024-07-15 09:42:16.320631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.870 [2024-07-15 09:42:16.320673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.870 [2024-07-15 09:42:16.320709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.870 [2024-07-15 09:42:16.320745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:24.870 [2024-07-15 09:42:16.320768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:24.870 [2024-07-15 09:42:16.320783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:24.870 Received 
shutdown signal, test time was about 33.768153 seconds 00:16:24.870 00:16:24.870 Latency(us) 00:16:24.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.870 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:24.870 Verification LBA range: start 0x0 length 0x4000 00:16:24.870 Nvme0n1 : 33.77 8009.05 31.29 0.00 0.00 15947.37 1079.85 4026531.84 00:16:24.870 =================================================================================================================== 00:16:24.870 Total : 8009.05 31.29 0.00 0.00 15947.37 1079.85 4026531.84 00:16:24.870 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.128 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:25.128 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:25.128 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:25.128 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.128 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:16:25.128 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.128 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:16:25.128 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.128 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.128 rmmod nvme_tcp 00:16:25.386 rmmod nvme_fabrics 00:16:25.386 rmmod nvme_keyring 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 77067 ']' 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 77067 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 77067 ']' 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 77067 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77067 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77067' 00:16:25.386 killing process with pid 77067 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 77067 00:16:25.386 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 
-- # wait 77067 00:16:25.644 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:25.644 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:25.644 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:25.644 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.644 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.644 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.644 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.644 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.644 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:25.644 00:16:25.644 real 0m39.899s 00:16:25.644 user 2m8.545s 00:16:25.644 sys 0m12.276s 00:16:25.644 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:25.644 09:42:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:25.644 ************************************ 00:16:25.644 END TEST nvmf_host_multipath_status 00:16:25.644 ************************************ 00:16:25.644 09:42:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:25.644 09:42:19 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:25.644 09:42:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:25.644 09:42:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:25.644 09:42:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:25.644 ************************************ 00:16:25.644 START TEST nvmf_discovery_remove_ifc 00:16:25.644 ************************************ 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:25.644 * Looking for test storage... 
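Note: the trace below is dense, so here is a condensed sketch of the sequence discovery_remove_ifc.sh exercises, reconstructed only from commands that appear later in this log; rpc_cmd and wait_for_bdev are the traced script's own helpers, and the exact script code may differ.

  # Sketch of the test flow (assumed ordering, taken from the traced commands below)
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  wait_for_bdev nvme0n1                                      # discovery attaches cnode0 and creates nvme0n1

  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  wait_for_bdev ''                                           # bdev must disappear once reconnects give up

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1                                      # discovery re-attaches and creates a fresh bdev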
00:16:25.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:16:25.644 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.645 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:25.903 Cannot find device "nvmf_tgt_br" 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:25.903 Cannot find device "nvmf_tgt_br2" 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:25.903 Cannot find device "nvmf_tgt_br" 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:25.903 Cannot find device "nvmf_tgt_br2" 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:25.903 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.904 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:26.162 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:26.162 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:26.162 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:26.162 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:26.162 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:26.162 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:26.162 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:26.162 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:26.162 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:26.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:16:26.162 00:16:26.162 --- 10.0.0.2 ping statistics --- 00:16:26.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.162 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:26.162 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:26.162 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:26.162 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:16:26.162 00:16:26.162 --- 10.0.0.3 ping statistics --- 00:16:26.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.162 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:26.162 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:26.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:26.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:16:26.162 00:16:26.163 --- 10.0.0.1 ping statistics --- 00:16:26.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.163 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77913 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77913 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77913 ']' 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.163 09:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:26.163 [2024-07-15 09:42:20.541191] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:16:26.163 [2024-07-15 09:42:20.541293] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.421 [2024-07-15 09:42:20.678025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.421 [2024-07-15 09:42:20.849431] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.421 [2024-07-15 09:42:20.849583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.422 [2024-07-15 09:42:20.849611] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.422 [2024-07-15 09:42:20.849622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.422 [2024-07-15 09:42:20.849632] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.422 [2024-07-15 09:42:20.849677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.680 [2024-07-15 09:42:20.933560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:27.246 [2024-07-15 09:42:21.605629] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.246 [2024-07-15 09:42:21.613734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:27.246 null0 00:16:27.246 [2024-07-15 09:42:21.645756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77945 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77945 /tmp/host.sock 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77945 ']' 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:27.246 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.246 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
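For orientation, the network the target was just started in can be summarized as below; this is a hedged condensation of the nvmf_veth_init commands traced above (link-up steps and the FORWARD iptables rule omitted), not a verbatim copy of nvmf/common.sh.

  # Initiator (root namespace, 10.0.0.1) <-> nvmf_br bridge <-> target namespace (10.0.0.2, 10.0.0.3)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target ends are moved into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                             # the *_br peer ends are enslaved to this bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT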
00:16:27.247 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:27.247 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.247 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:27.247 09:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:27.505 [2024-07-15 09:42:21.728368] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:27.505 [2024-07-15 09:42:21.728455] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77945 ] 00:16:27.505 [2024-07-15 09:42:21.871317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.763 [2024-07-15 09:42:21.991319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:28.329 [2024-07-15 09:42:22.751332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.329 09:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.702 [2024-07-15 09:42:23.799107] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:29.702 [2024-07-15 09:42:23.799158] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:29.702 [2024-07-15 09:42:23.799181] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:29.702 [2024-07-15 09:42:23.805161] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:29.702 [2024-07-15 09:42:23.862741] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:29.702 [2024-07-15 09:42:23.862862] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:29.702 [2024-07-15 09:42:23.862939] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:29.702 [2024-07-15 09:42:23.862983] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:29.702 [2024-07-15 09:42:23.863027] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.702 [2024-07-15 09:42:23.868025] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x103dde0 was disconnected and freed. delete nvme_qpair. 
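The rpc_cmd/jq/sort/xargs fragments that repeat below come from the script's get_bdev_list and wait_for_bdev helpers. A minimal sketch of what they appear to do (assumed structure; the real helpers in discovery_remove_ifc.sh presumably also cap the number of retries):

  get_bdev_list() {
      # Names of all bdevs known to the host app, flattened to one space-separated line.
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # Poll once per second until the bdev list matches the expected value ('' means no bdevs left).
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }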
00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:29.702 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.703 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:29.703 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.703 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:29.703 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:29.703 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.703 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.703 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:29.703 09:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:30.636 09:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:30.636 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:30.636 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:30.636 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.636 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.636 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:30.636 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:30.636 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.636 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:30.636 09:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:32.006 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.006 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.006 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 
-- # jq -r '.[].name' 00:16:32.006 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.006 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:32.007 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.007 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.007 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.007 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:32.007 09:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:32.981 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.981 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.981 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.981 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.981 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.981 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:32.981 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.981 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.981 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:32.981 09:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:33.913 09:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.913 09:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.913 09:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.913 09:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.913 09:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.913 09:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:33.913 09:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.913 09:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.913 09:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:33.913 09:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:34.843 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:34.843 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:34.843 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.843 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:34.844 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r 
'.[].name' 00:16:34.844 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:34.844 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:34.844 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.844 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:34.844 09:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:34.844 [2024-07-15 09:42:29.290181] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:34.844 [2024-07-15 09:42:29.290260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.844 [2024-07-15 09:42:29.290281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.844 [2024-07-15 09:42:29.290296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.844 [2024-07-15 09:42:29.290307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.844 [2024-07-15 09:42:29.290319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.844 [2024-07-15 09:42:29.290330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.844 [2024-07-15 09:42:29.290343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.844 [2024-07-15 09:42:29.290354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.844 [2024-07-15 09:42:29.290366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.844 [2024-07-15 09:42:29.290377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.844 [2024-07-15 09:42:29.290388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa3ac0 is same with the state(5) to be set 00:16:34.844 [2024-07-15 09:42:29.300176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa3ac0 (9): Bad file descriptor 00:16:34.844 [2024-07-15 09:42:29.310201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:36.214 09:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:36.214 09:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:36.214 09:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:36.214 09:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.214 09:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:36.214 09:42:30 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:36.214 09:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:36.214 [2024-07-15 09:42:30.354012] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:36.214 [2024-07-15 09:42:30.354653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa3ac0 with addr=10.0.0.2, port=4420 00:16:36.214 [2024-07-15 09:42:30.354744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa3ac0 is same with the state(5) to be set 00:16:36.214 [2024-07-15 09:42:30.354871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa3ac0 (9): Bad file descriptor 00:16:36.214 [2024-07-15 09:42:30.356081] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:36.214 [2024-07-15 09:42:30.356238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:36.214 [2024-07-15 09:42:30.356318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:36.214 [2024-07-15 09:42:30.356375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:36.214 [2024-07-15 09:42:30.356501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:36.214 [2024-07-15 09:42:30.356566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:36.214 09:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.214 09:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:36.214 09:42:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:37.147 [2024-07-15 09:42:31.356685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:37.147 [2024-07-15 09:42:31.356773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:37.147 [2024-07-15 09:42:31.356791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:37.147 [2024-07-15 09:42:31.356803] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:37.147 [2024-07-15 09:42:31.356835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:37.147 [2024-07-15 09:42:31.356871] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:37.147 [2024-07-15 09:42:31.356967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.147 [2024-07-15 09:42:31.356989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.147 [2024-07-15 09:42:31.357005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.147 [2024-07-15 09:42:31.357017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.147 [2024-07-15 09:42:31.357029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.147 [2024-07-15 09:42:31.357039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.147 [2024-07-15 09:42:31.357071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.147 [2024-07-15 09:42:31.357084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.147 [2024-07-15 09:42:31.357098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.147 [2024-07-15 09:42:31.357109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.147 [2024-07-15 09:42:31.357120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:37.147 [2024-07-15 09:42:31.357382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa7860 (9): Bad file descriptor 00:16:37.147 [2024-07-15 09:42:31.358393] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:37.147 [2024-07-15 09:42:31.358412] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:37.147 09:42:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:38.080 09:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:38.080 09:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.080 09:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.080 09:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:38.080 09:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:38.080 09:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:38.080 09:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:38.080 09:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.338 09:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:38.338 09:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:38.904 [2024-07-15 09:42:33.363789] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:38.904 [2024-07-15 09:42:33.363846] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:38.904 [2024-07-15 09:42:33.363869] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:38.904 [2024-07-15 09:42:33.369836] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:39.162 [2024-07-15 09:42:33.426366] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:39.162 [2024-07-15 09:42:33.426435] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:39.162 [2024-07-15 09:42:33.426466] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:39.162 [2024-07-15 09:42:33.426487] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:16:39.162 [2024-07-15 09:42:33.426498] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:39.162 [2024-07-15 09:42:33.432464] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x104ad90 was disconnected and freed. delete nvme_qpair. 
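For reference, here is a minimal sketch of the polling pattern the trace above keeps repeating: get_bdev_list pipes rpc_cmd bdev_get_bdevs through jq, sort and xargs, and the wait loop sleeps one second at a time until the expected bdev name shows up again after the interface flap. The exact helper bodies live in host/discovery_remove_ifc.sh; treat this as an approximation reconstructed from the trace, with rpc_cmd assumed to be the test framework's RPC wrapper.

# Approximate reconstruction of the helpers exercised above (not the script verbatim)
get_bdev_list() {
	# names of all bdevs known to the SPDK host app listening on /tmp/host.sock
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
	local bdev=$1
	# poll once a second until the expected bdev (nvme1n1 here) reappears
	while [[ "$(get_bdev_list)" != *"$bdev"* ]]; do
		sleep 1
	done
}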
00:16:39.162 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:39.162 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:39.162 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:39.162 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.162 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.162 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:39.162 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:39.162 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77945 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77945 ']' 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77945 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77945 00:16:39.420 killing process with pid 77945 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77945' 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77945 00:16:39.420 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77945 00:16:39.678 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:39.678 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:39.678 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:16:39.678 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:39.678 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:16:39.678 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:39.678 09:42:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:39.678 rmmod nvme_tcp 00:16:39.678 rmmod nvme_fabrics 00:16:39.678 rmmod nvme_keyring 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:16:39.678 09:42:34 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77913 ']' 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77913 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77913 ']' 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77913 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77913 00:16:39.678 killing process with pid 77913 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77913' 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77913 00:16:39.678 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77913 00:16:39.936 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:39.936 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:39.936 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:39.936 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.936 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:39.936 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.936 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.936 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.936 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:39.936 00:16:39.936 real 0m14.342s 00:16:39.936 user 0m24.755s 00:16:39.936 sys 0m2.487s 00:16:39.936 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:39.936 09:42:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:39.936 ************************************ 00:16:39.936 END TEST nvmf_discovery_remove_ifc 00:16:39.936 ************************************ 00:16:39.936 09:42:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:39.936 09:42:34 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:39.936 09:42:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:39.936 09:42:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.936 09:42:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:39.936 ************************************ 00:16:39.936 START TEST nvmf_identify_kernel_target 00:16:39.936 ************************************ 00:16:39.936 09:42:34 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:40.196 * Looking for test storage... 00:16:40.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.196 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:40.197 Cannot find device "nvmf_tgt_br" 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:40.197 Cannot find device "nvmf_tgt_br2" 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:40.197 Cannot find device "nvmf_tgt_br" 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:40.197 Cannot find device "nvmf_tgt_br2" 00:16:40.197 09:42:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:40.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:40.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:40.197 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:40.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:16:40.461 00:16:40.461 --- 10.0.0.2 ping statistics --- 00:16:40.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.461 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:40.461 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:40.461 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:16:40.461 00:16:40.461 --- 10.0.0.3 ping statistics --- 00:16:40.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.461 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:40.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:40.461 00:16:40.461 --- 10.0.0.1 ping statistics --- 00:16:40.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.461 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:40.461 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:40.462 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:40.462 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:40.462 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:40.462 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:16:40.462 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:40.462 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:40.462 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:40.462 09:42:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:41.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:41.029 Waiting for block devices as requested 00:16:41.029 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:41.029 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:41.288 No valid GPT data, bailing 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:41.288 No valid GPT data, bailing 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:41.288 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:41.288 No valid GPT data, bailing 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:41.547 No valid GPT data, bailing 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
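The device scan just traced can be read as the sketch below: each /sys/block/nvme* entry is skipped if it is zoned, then spdk-gpt.py plus blkid decide whether the disk already carries a partition table ("No valid GPT data, bailing" plus an empty PTTYPE means the device is free), and the last free device, /dev/nvme1n1 in this run, becomes the backing device for the kernel target namespace set up in the configfs steps that follow. This is a rough reconstruction from the trace, not the literal nvmf/common.sh code.

# Rough reconstruction of the block-device selection traced above
nvme=""
for block in /sys/block/nvme*; do
	dev=${block##*/}
	# skip zoned namespaces
	[[ -e "$block/queue/zoned" && "$(cat "$block/queue/zoned")" != none ]] && continue
	# a disk is treated as in use if spdk-gpt.py finds GPT data or blkid reports a partition table
	if ! /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev" &&
	   [[ -z "$(blkid -s PTTYPE -o value "/dev/$dev")" ]]; then
		nvme=/dev/$dev
	fi
done
echo "free device picked for the kernel target: $nvme"   # /dev/nvme1n1 in this run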
00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid=d2f81337-7559-423d-93ce-5836d202b6da -a 10.0.0.1 -t tcp -s 4420 00:16:41.547 00:16:41.547 Discovery Log Number of Records 2, Generation counter 2 00:16:41.547 =====Discovery Log Entry 0====== 00:16:41.547 trtype: tcp 00:16:41.547 adrfam: ipv4 00:16:41.547 subtype: current discovery subsystem 00:16:41.547 treq: not specified, sq flow control disable supported 00:16:41.547 portid: 1 00:16:41.547 trsvcid: 4420 00:16:41.547 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:41.547 traddr: 10.0.0.1 00:16:41.547 eflags: none 00:16:41.547 sectype: none 00:16:41.547 =====Discovery Log Entry 1====== 00:16:41.547 trtype: tcp 00:16:41.547 adrfam: ipv4 00:16:41.547 subtype: nvme subsystem 00:16:41.547 treq: not specified, sq flow control disable supported 00:16:41.547 portid: 1 00:16:41.547 trsvcid: 4420 00:16:41.547 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:41.547 traddr: 10.0.0.1 00:16:41.547 eflags: none 00:16:41.547 sectype: none 00:16:41.547 09:42:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:41.547 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:41.808 ===================================================== 00:16:41.808 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:41.808 ===================================================== 00:16:41.808 Controller Capabilities/Features 00:16:41.808 ================================ 00:16:41.808 Vendor ID: 0000 00:16:41.808 Subsystem Vendor ID: 0000 00:16:41.808 Serial Number: f69d7c78f6eda0c7024c 00:16:41.808 Model Number: Linux 00:16:41.808 Firmware Version: 6.7.0-68 00:16:41.808 Recommended Arb Burst: 0 00:16:41.808 IEEE OUI Identifier: 00 00 00 00:16:41.808 Multi-path I/O 00:16:41.808 May have multiple subsystem ports: No 00:16:41.808 May have multiple controllers: No 00:16:41.808 Associated with SR-IOV VF: No 00:16:41.808 Max Data Transfer Size: Unlimited 00:16:41.808 Max Number of Namespaces: 0 
00:16:41.808 Max Number of I/O Queues: 1024 00:16:41.808 NVMe Specification Version (VS): 1.3 00:16:41.808 NVMe Specification Version (Identify): 1.3 00:16:41.808 Maximum Queue Entries: 1024 00:16:41.808 Contiguous Queues Required: No 00:16:41.808 Arbitration Mechanisms Supported 00:16:41.808 Weighted Round Robin: Not Supported 00:16:41.808 Vendor Specific: Not Supported 00:16:41.808 Reset Timeout: 7500 ms 00:16:41.808 Doorbell Stride: 4 bytes 00:16:41.808 NVM Subsystem Reset: Not Supported 00:16:41.808 Command Sets Supported 00:16:41.808 NVM Command Set: Supported 00:16:41.808 Boot Partition: Not Supported 00:16:41.808 Memory Page Size Minimum: 4096 bytes 00:16:41.808 Memory Page Size Maximum: 4096 bytes 00:16:41.808 Persistent Memory Region: Not Supported 00:16:41.808 Optional Asynchronous Events Supported 00:16:41.808 Namespace Attribute Notices: Not Supported 00:16:41.808 Firmware Activation Notices: Not Supported 00:16:41.808 ANA Change Notices: Not Supported 00:16:41.808 PLE Aggregate Log Change Notices: Not Supported 00:16:41.808 LBA Status Info Alert Notices: Not Supported 00:16:41.808 EGE Aggregate Log Change Notices: Not Supported 00:16:41.808 Normal NVM Subsystem Shutdown event: Not Supported 00:16:41.808 Zone Descriptor Change Notices: Not Supported 00:16:41.808 Discovery Log Change Notices: Supported 00:16:41.808 Controller Attributes 00:16:41.808 128-bit Host Identifier: Not Supported 00:16:41.808 Non-Operational Permissive Mode: Not Supported 00:16:41.808 NVM Sets: Not Supported 00:16:41.808 Read Recovery Levels: Not Supported 00:16:41.808 Endurance Groups: Not Supported 00:16:41.808 Predictable Latency Mode: Not Supported 00:16:41.808 Traffic Based Keep ALive: Not Supported 00:16:41.808 Namespace Granularity: Not Supported 00:16:41.808 SQ Associations: Not Supported 00:16:41.808 UUID List: Not Supported 00:16:41.808 Multi-Domain Subsystem: Not Supported 00:16:41.808 Fixed Capacity Management: Not Supported 00:16:41.808 Variable Capacity Management: Not Supported 00:16:41.808 Delete Endurance Group: Not Supported 00:16:41.808 Delete NVM Set: Not Supported 00:16:41.808 Extended LBA Formats Supported: Not Supported 00:16:41.808 Flexible Data Placement Supported: Not Supported 00:16:41.808 00:16:41.808 Controller Memory Buffer Support 00:16:41.808 ================================ 00:16:41.808 Supported: No 00:16:41.808 00:16:41.808 Persistent Memory Region Support 00:16:41.808 ================================ 00:16:41.808 Supported: No 00:16:41.808 00:16:41.808 Admin Command Set Attributes 00:16:41.808 ============================ 00:16:41.808 Security Send/Receive: Not Supported 00:16:41.808 Format NVM: Not Supported 00:16:41.808 Firmware Activate/Download: Not Supported 00:16:41.808 Namespace Management: Not Supported 00:16:41.808 Device Self-Test: Not Supported 00:16:41.808 Directives: Not Supported 00:16:41.808 NVMe-MI: Not Supported 00:16:41.808 Virtualization Management: Not Supported 00:16:41.808 Doorbell Buffer Config: Not Supported 00:16:41.808 Get LBA Status Capability: Not Supported 00:16:41.808 Command & Feature Lockdown Capability: Not Supported 00:16:41.808 Abort Command Limit: 1 00:16:41.808 Async Event Request Limit: 1 00:16:41.808 Number of Firmware Slots: N/A 00:16:41.808 Firmware Slot 1 Read-Only: N/A 00:16:41.808 Firmware Activation Without Reset: N/A 00:16:41.808 Multiple Update Detection Support: N/A 00:16:41.808 Firmware Update Granularity: No Information Provided 00:16:41.808 Per-Namespace SMART Log: No 00:16:41.808 Asymmetric Namespace Access Log Page: 
Not Supported 00:16:41.808 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:41.808 Command Effects Log Page: Not Supported 00:16:41.808 Get Log Page Extended Data: Supported 00:16:41.808 Telemetry Log Pages: Not Supported 00:16:41.808 Persistent Event Log Pages: Not Supported 00:16:41.808 Supported Log Pages Log Page: May Support 00:16:41.808 Commands Supported & Effects Log Page: Not Supported 00:16:41.808 Feature Identifiers & Effects Log Page:May Support 00:16:41.808 NVMe-MI Commands & Effects Log Page: May Support 00:16:41.808 Data Area 4 for Telemetry Log: Not Supported 00:16:41.808 Error Log Page Entries Supported: 1 00:16:41.808 Keep Alive: Not Supported 00:16:41.808 00:16:41.808 NVM Command Set Attributes 00:16:41.808 ========================== 00:16:41.808 Submission Queue Entry Size 00:16:41.808 Max: 1 00:16:41.808 Min: 1 00:16:41.808 Completion Queue Entry Size 00:16:41.808 Max: 1 00:16:41.808 Min: 1 00:16:41.808 Number of Namespaces: 0 00:16:41.808 Compare Command: Not Supported 00:16:41.808 Write Uncorrectable Command: Not Supported 00:16:41.808 Dataset Management Command: Not Supported 00:16:41.808 Write Zeroes Command: Not Supported 00:16:41.808 Set Features Save Field: Not Supported 00:16:41.808 Reservations: Not Supported 00:16:41.808 Timestamp: Not Supported 00:16:41.808 Copy: Not Supported 00:16:41.808 Volatile Write Cache: Not Present 00:16:41.808 Atomic Write Unit (Normal): 1 00:16:41.808 Atomic Write Unit (PFail): 1 00:16:41.808 Atomic Compare & Write Unit: 1 00:16:41.808 Fused Compare & Write: Not Supported 00:16:41.808 Scatter-Gather List 00:16:41.808 SGL Command Set: Supported 00:16:41.808 SGL Keyed: Not Supported 00:16:41.808 SGL Bit Bucket Descriptor: Not Supported 00:16:41.808 SGL Metadata Pointer: Not Supported 00:16:41.808 Oversized SGL: Not Supported 00:16:41.808 SGL Metadata Address: Not Supported 00:16:41.808 SGL Offset: Supported 00:16:41.808 Transport SGL Data Block: Not Supported 00:16:41.808 Replay Protected Memory Block: Not Supported 00:16:41.808 00:16:41.808 Firmware Slot Information 00:16:41.808 ========================= 00:16:41.808 Active slot: 0 00:16:41.808 00:16:41.808 00:16:41.808 Error Log 00:16:41.808 ========= 00:16:41.808 00:16:41.808 Active Namespaces 00:16:41.808 ================= 00:16:41.808 Discovery Log Page 00:16:41.808 ================== 00:16:41.808 Generation Counter: 2 00:16:41.808 Number of Records: 2 00:16:41.808 Record Format: 0 00:16:41.808 00:16:41.808 Discovery Log Entry 0 00:16:41.808 ---------------------- 00:16:41.808 Transport Type: 3 (TCP) 00:16:41.808 Address Family: 1 (IPv4) 00:16:41.808 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:41.808 Entry Flags: 00:16:41.808 Duplicate Returned Information: 0 00:16:41.808 Explicit Persistent Connection Support for Discovery: 0 00:16:41.808 Transport Requirements: 00:16:41.808 Secure Channel: Not Specified 00:16:41.808 Port ID: 1 (0x0001) 00:16:41.808 Controller ID: 65535 (0xffff) 00:16:41.808 Admin Max SQ Size: 32 00:16:41.808 Transport Service Identifier: 4420 00:16:41.808 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:41.808 Transport Address: 10.0.0.1 00:16:41.808 Discovery Log Entry 1 00:16:41.808 ---------------------- 00:16:41.808 Transport Type: 3 (TCP) 00:16:41.808 Address Family: 1 (IPv4) 00:16:41.808 Subsystem Type: 2 (NVM Subsystem) 00:16:41.808 Entry Flags: 00:16:41.808 Duplicate Returned Information: 0 00:16:41.808 Explicit Persistent Connection Support for Discovery: 0 00:16:41.808 Transport Requirements: 00:16:41.808 
Secure Channel: Not Specified 00:16:41.808 Port ID: 1 (0x0001) 00:16:41.808 Controller ID: 65535 (0xffff) 00:16:41.808 Admin Max SQ Size: 32 00:16:41.808 Transport Service Identifier: 4420 00:16:41.808 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:41.808 Transport Address: 10.0.0.1 00:16:41.808 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:41.808 get_feature(0x01) failed 00:16:41.808 get_feature(0x02) failed 00:16:41.809 get_feature(0x04) failed 00:16:41.809 ===================================================== 00:16:41.809 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:41.809 ===================================================== 00:16:41.809 Controller Capabilities/Features 00:16:41.809 ================================ 00:16:41.809 Vendor ID: 0000 00:16:41.809 Subsystem Vendor ID: 0000 00:16:41.809 Serial Number: d558a740e08da5f71715 00:16:41.809 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:41.809 Firmware Version: 6.7.0-68 00:16:41.809 Recommended Arb Burst: 6 00:16:41.809 IEEE OUI Identifier: 00 00 00 00:16:41.809 Multi-path I/O 00:16:41.809 May have multiple subsystem ports: Yes 00:16:41.809 May have multiple controllers: Yes 00:16:41.809 Associated with SR-IOV VF: No 00:16:41.809 Max Data Transfer Size: Unlimited 00:16:41.809 Max Number of Namespaces: 1024 00:16:41.809 Max Number of I/O Queues: 128 00:16:41.809 NVMe Specification Version (VS): 1.3 00:16:41.809 NVMe Specification Version (Identify): 1.3 00:16:41.809 Maximum Queue Entries: 1024 00:16:41.809 Contiguous Queues Required: No 00:16:41.809 Arbitration Mechanisms Supported 00:16:41.809 Weighted Round Robin: Not Supported 00:16:41.809 Vendor Specific: Not Supported 00:16:41.809 Reset Timeout: 7500 ms 00:16:41.809 Doorbell Stride: 4 bytes 00:16:41.809 NVM Subsystem Reset: Not Supported 00:16:41.809 Command Sets Supported 00:16:41.809 NVM Command Set: Supported 00:16:41.809 Boot Partition: Not Supported 00:16:41.809 Memory Page Size Minimum: 4096 bytes 00:16:41.809 Memory Page Size Maximum: 4096 bytes 00:16:41.809 Persistent Memory Region: Not Supported 00:16:41.809 Optional Asynchronous Events Supported 00:16:41.809 Namespace Attribute Notices: Supported 00:16:41.809 Firmware Activation Notices: Not Supported 00:16:41.809 ANA Change Notices: Supported 00:16:41.809 PLE Aggregate Log Change Notices: Not Supported 00:16:41.809 LBA Status Info Alert Notices: Not Supported 00:16:41.809 EGE Aggregate Log Change Notices: Not Supported 00:16:41.809 Normal NVM Subsystem Shutdown event: Not Supported 00:16:41.809 Zone Descriptor Change Notices: Not Supported 00:16:41.809 Discovery Log Change Notices: Not Supported 00:16:41.809 Controller Attributes 00:16:41.809 128-bit Host Identifier: Supported 00:16:41.809 Non-Operational Permissive Mode: Not Supported 00:16:41.809 NVM Sets: Not Supported 00:16:41.809 Read Recovery Levels: Not Supported 00:16:41.809 Endurance Groups: Not Supported 00:16:41.809 Predictable Latency Mode: Not Supported 00:16:41.809 Traffic Based Keep ALive: Supported 00:16:41.809 Namespace Granularity: Not Supported 00:16:41.809 SQ Associations: Not Supported 00:16:41.809 UUID List: Not Supported 00:16:41.809 Multi-Domain Subsystem: Not Supported 00:16:41.809 Fixed Capacity Management: Not Supported 00:16:41.809 Variable Capacity Management: Not Supported 00:16:41.809 
Delete Endurance Group: Not Supported 00:16:41.809 Delete NVM Set: Not Supported 00:16:41.809 Extended LBA Formats Supported: Not Supported 00:16:41.809 Flexible Data Placement Supported: Not Supported 00:16:41.809 00:16:41.809 Controller Memory Buffer Support 00:16:41.809 ================================ 00:16:41.809 Supported: No 00:16:41.809 00:16:41.809 Persistent Memory Region Support 00:16:41.809 ================================ 00:16:41.809 Supported: No 00:16:41.809 00:16:41.809 Admin Command Set Attributes 00:16:41.809 ============================ 00:16:41.809 Security Send/Receive: Not Supported 00:16:41.809 Format NVM: Not Supported 00:16:41.809 Firmware Activate/Download: Not Supported 00:16:41.809 Namespace Management: Not Supported 00:16:41.809 Device Self-Test: Not Supported 00:16:41.809 Directives: Not Supported 00:16:41.809 NVMe-MI: Not Supported 00:16:41.809 Virtualization Management: Not Supported 00:16:41.809 Doorbell Buffer Config: Not Supported 00:16:41.809 Get LBA Status Capability: Not Supported 00:16:41.809 Command & Feature Lockdown Capability: Not Supported 00:16:41.809 Abort Command Limit: 4 00:16:41.809 Async Event Request Limit: 4 00:16:41.809 Number of Firmware Slots: N/A 00:16:41.809 Firmware Slot 1 Read-Only: N/A 00:16:41.809 Firmware Activation Without Reset: N/A 00:16:41.809 Multiple Update Detection Support: N/A 00:16:41.809 Firmware Update Granularity: No Information Provided 00:16:41.809 Per-Namespace SMART Log: Yes 00:16:41.809 Asymmetric Namespace Access Log Page: Supported 00:16:41.809 ANA Transition Time : 10 sec 00:16:41.809 00:16:41.809 Asymmetric Namespace Access Capabilities 00:16:41.809 ANA Optimized State : Supported 00:16:41.809 ANA Non-Optimized State : Supported 00:16:41.809 ANA Inaccessible State : Supported 00:16:41.809 ANA Persistent Loss State : Supported 00:16:41.809 ANA Change State : Supported 00:16:41.809 ANAGRPID is not changed : No 00:16:41.809 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:41.809 00:16:41.809 ANA Group Identifier Maximum : 128 00:16:41.809 Number of ANA Group Identifiers : 128 00:16:41.809 Max Number of Allowed Namespaces : 1024 00:16:41.809 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:16:41.809 Command Effects Log Page: Supported 00:16:41.809 Get Log Page Extended Data: Supported 00:16:41.809 Telemetry Log Pages: Not Supported 00:16:41.809 Persistent Event Log Pages: Not Supported 00:16:41.809 Supported Log Pages Log Page: May Support 00:16:41.809 Commands Supported & Effects Log Page: Not Supported 00:16:41.809 Feature Identifiers & Effects Log Page:May Support 00:16:41.809 NVMe-MI Commands & Effects Log Page: May Support 00:16:41.809 Data Area 4 for Telemetry Log: Not Supported 00:16:41.809 Error Log Page Entries Supported: 128 00:16:41.809 Keep Alive: Supported 00:16:41.809 Keep Alive Granularity: 1000 ms 00:16:41.809 00:16:41.809 NVM Command Set Attributes 00:16:41.809 ========================== 00:16:41.809 Submission Queue Entry Size 00:16:41.809 Max: 64 00:16:41.809 Min: 64 00:16:41.809 Completion Queue Entry Size 00:16:41.809 Max: 16 00:16:41.809 Min: 16 00:16:41.809 Number of Namespaces: 1024 00:16:41.809 Compare Command: Not Supported 00:16:41.809 Write Uncorrectable Command: Not Supported 00:16:41.809 Dataset Management Command: Supported 00:16:41.809 Write Zeroes Command: Supported 00:16:41.809 Set Features Save Field: Not Supported 00:16:41.809 Reservations: Not Supported 00:16:41.809 Timestamp: Not Supported 00:16:41.809 Copy: Not Supported 00:16:41.809 Volatile Write Cache: Present 
00:16:41.809 Atomic Write Unit (Normal): 1 00:16:41.809 Atomic Write Unit (PFail): 1 00:16:41.809 Atomic Compare & Write Unit: 1 00:16:41.809 Fused Compare & Write: Not Supported 00:16:41.809 Scatter-Gather List 00:16:41.809 SGL Command Set: Supported 00:16:41.809 SGL Keyed: Not Supported 00:16:41.809 SGL Bit Bucket Descriptor: Not Supported 00:16:41.809 SGL Metadata Pointer: Not Supported 00:16:41.809 Oversized SGL: Not Supported 00:16:41.809 SGL Metadata Address: Not Supported 00:16:41.809 SGL Offset: Supported 00:16:41.809 Transport SGL Data Block: Not Supported 00:16:41.809 Replay Protected Memory Block: Not Supported 00:16:41.809 00:16:41.809 Firmware Slot Information 00:16:41.809 ========================= 00:16:41.809 Active slot: 0 00:16:41.809 00:16:41.809 Asymmetric Namespace Access 00:16:41.809 =========================== 00:16:41.809 Change Count : 0 00:16:41.809 Number of ANA Group Descriptors : 1 00:16:41.809 ANA Group Descriptor : 0 00:16:41.809 ANA Group ID : 1 00:16:41.809 Number of NSID Values : 1 00:16:41.809 Change Count : 0 00:16:41.809 ANA State : 1 00:16:41.809 Namespace Identifier : 1 00:16:41.809 00:16:41.809 Commands Supported and Effects 00:16:41.809 ============================== 00:16:41.809 Admin Commands 00:16:41.809 -------------- 00:16:41.809 Get Log Page (02h): Supported 00:16:41.809 Identify (06h): Supported 00:16:41.809 Abort (08h): Supported 00:16:41.809 Set Features (09h): Supported 00:16:41.809 Get Features (0Ah): Supported 00:16:41.809 Asynchronous Event Request (0Ch): Supported 00:16:41.809 Keep Alive (18h): Supported 00:16:41.809 I/O Commands 00:16:41.809 ------------ 00:16:41.809 Flush (00h): Supported 00:16:41.809 Write (01h): Supported LBA-Change 00:16:41.809 Read (02h): Supported 00:16:41.809 Write Zeroes (08h): Supported LBA-Change 00:16:41.809 Dataset Management (09h): Supported 00:16:41.809 00:16:41.809 Error Log 00:16:41.809 ========= 00:16:41.809 Entry: 0 00:16:41.809 Error Count: 0x3 00:16:41.809 Submission Queue Id: 0x0 00:16:41.809 Command Id: 0x5 00:16:41.809 Phase Bit: 0 00:16:41.809 Status Code: 0x2 00:16:41.809 Status Code Type: 0x0 00:16:41.809 Do Not Retry: 1 00:16:41.809 Error Location: 0x28 00:16:41.809 LBA: 0x0 00:16:41.809 Namespace: 0x0 00:16:41.809 Vendor Log Page: 0x0 00:16:41.809 ----------- 00:16:41.809 Entry: 1 00:16:41.809 Error Count: 0x2 00:16:41.809 Submission Queue Id: 0x0 00:16:41.809 Command Id: 0x5 00:16:41.809 Phase Bit: 0 00:16:41.809 Status Code: 0x2 00:16:41.809 Status Code Type: 0x0 00:16:41.809 Do Not Retry: 1 00:16:41.809 Error Location: 0x28 00:16:41.809 LBA: 0x0 00:16:41.809 Namespace: 0x0 00:16:41.809 Vendor Log Page: 0x0 00:16:41.809 ----------- 00:16:41.809 Entry: 2 00:16:41.809 Error Count: 0x1 00:16:41.810 Submission Queue Id: 0x0 00:16:41.810 Command Id: 0x4 00:16:41.810 Phase Bit: 0 00:16:41.810 Status Code: 0x2 00:16:41.810 Status Code Type: 0x0 00:16:41.810 Do Not Retry: 1 00:16:41.810 Error Location: 0x28 00:16:41.810 LBA: 0x0 00:16:41.810 Namespace: 0x0 00:16:41.810 Vendor Log Page: 0x0 00:16:41.810 00:16:41.810 Number of Queues 00:16:41.810 ================ 00:16:41.810 Number of I/O Submission Queues: 128 00:16:41.810 Number of I/O Completion Queues: 128 00:16:41.810 00:16:41.810 ZNS Specific Controller Data 00:16:41.810 ============================ 00:16:41.810 Zone Append Size Limit: 0 00:16:41.810 00:16:41.810 00:16:41.810 Active Namespaces 00:16:41.810 ================= 00:16:41.810 get_feature(0x05) failed 00:16:41.810 Namespace ID:1 00:16:41.810 Command Set Identifier: NVM (00h) 
00:16:41.810 Deallocate: Supported 00:16:41.810 Deallocated/Unwritten Error: Not Supported 00:16:41.810 Deallocated Read Value: Unknown 00:16:41.810 Deallocate in Write Zeroes: Not Supported 00:16:41.810 Deallocated Guard Field: 0xFFFF 00:16:41.810 Flush: Supported 00:16:41.810 Reservation: Not Supported 00:16:41.810 Namespace Sharing Capabilities: Multiple Controllers 00:16:41.810 Size (in LBAs): 1310720 (5GiB) 00:16:41.810 Capacity (in LBAs): 1310720 (5GiB) 00:16:41.810 Utilization (in LBAs): 1310720 (5GiB) 00:16:41.810 UUID: 3c548b8c-5f77-40f2-8f53-59fab5af4fc8 00:16:41.810 Thin Provisioning: Not Supported 00:16:41.810 Per-NS Atomic Units: Yes 00:16:41.810 Atomic Boundary Size (Normal): 0 00:16:41.810 Atomic Boundary Size (PFail): 0 00:16:41.810 Atomic Boundary Offset: 0 00:16:41.810 NGUID/EUI64 Never Reused: No 00:16:41.810 ANA group ID: 1 00:16:41.810 Namespace Write Protected: No 00:16:41.810 Number of LBA Formats: 1 00:16:41.810 Current LBA Format: LBA Format #00 00:16:41.810 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:41.810 00:16:41.810 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:41.810 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:41.810 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:16:42.068 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.068 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:16:42.068 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.068 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.068 rmmod nvme_tcp 00:16:42.068 rmmod nvme_fabrics 00:16:42.068 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.068 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:16:42.068 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:16:42.068 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:42.068 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:42.068 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:42.069 
09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:42.069 09:42:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:43.004 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:43.004 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:43.004 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:43.004 00:16:43.004 real 0m2.931s 00:16:43.004 user 0m1.001s 00:16:43.004 sys 0m1.386s 00:16:43.004 09:42:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:43.004 09:42:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.004 ************************************ 00:16:43.004 END TEST nvmf_identify_kernel_target 00:16:43.004 ************************************ 00:16:43.004 09:42:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:43.004 09:42:37 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:43.004 09:42:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:43.004 09:42:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.004 09:42:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:43.004 ************************************ 00:16:43.004 START TEST nvmf_auth_host 00:16:43.004 ************************************ 00:16:43.004 09:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:43.004 * Looking for test storage... 
00:16:43.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:43.004 09:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:43.004 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:43.263 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:43.264 Cannot find device "nvmf_tgt_br" 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.264 Cannot find device "nvmf_tgt_br2" 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:43.264 Cannot find device "nvmf_tgt_br" 
00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:43.264 Cannot find device "nvmf_tgt_br2" 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:43.264 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:43.522 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:43.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:16:43.523 00:16:43.523 --- 10.0.0.2 ping statistics --- 00:16:43.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.523 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:43.523 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:43.523 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:16:43.523 00:16:43.523 --- 10.0.0.3 ping statistics --- 00:16:43.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.523 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:43.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:16:43.523 00:16:43.523 --- 10.0.0.1 ping statistics --- 00:16:43.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.523 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78827 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78827 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78827 ']' 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.523 09:42:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.523 09:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.902 09:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.902 09:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:44.902 09:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:44.902 09:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.902 09:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1759453a49c7be4ab0053ab9f9d7cc2d 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.iWs 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1759453a49c7be4ab0053ab9f9d7cc2d 0 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1759453a49c7be4ab0053ab9f9d7cc2d 0 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1759453a49c7be4ab0053ab9f9d7cc2d 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.iWs 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.iWs 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.iWs 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f4f46b144e2665c79c6c694fee1055851d7ebee64678c5510f21a108460b4cbf 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.rYt 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f4f46b144e2665c79c6c694fee1055851d7ebee64678c5510f21a108460b4cbf 3 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f4f46b144e2665c79c6c694fee1055851d7ebee64678c5510f21a108460b4cbf 3 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f4f46b144e2665c79c6c694fee1055851d7ebee64678c5510f21a108460b4cbf 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.rYt 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.rYt 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.rYt 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a9217165c4f1c1d226d0eefb155f2b1a4398b1ddca7aab59 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.mVm 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a9217165c4f1c1d226d0eefb155f2b1a4398b1ddca7aab59 0 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a9217165c4f1c1d226d0eefb155f2b1a4398b1ddca7aab59 0 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a9217165c4f1c1d226d0eefb155f2b1a4398b1ddca7aab59 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.mVm 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.mVm 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.mVm 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d7b4c044c2fbedec633a0f186f0e1e6c646153527d84040f 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0V0 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d7b4c044c2fbedec633a0f186f0e1e6c646153527d84040f 2 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d7b4c044c2fbedec633a0f186f0e1e6c646153527d84040f 2 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d7b4c044c2fbedec633a0f186f0e1e6c646153527d84040f 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0V0 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0V0 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.0V0 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e500fc103b1a157b9d793b1c2dd23ca4 00:16:44.902 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.T5N 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e500fc103b1a157b9d793b1c2dd23ca4 
1 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e500fc103b1a157b9d793b1c2dd23ca4 1 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e500fc103b1a157b9d793b1c2dd23ca4 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.T5N 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.T5N 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.T5N 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d2b01a5ff7ce9ba9e3d42d71e55c9036 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:44.903 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.t3n 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d2b01a5ff7ce9ba9e3d42d71e55c9036 1 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d2b01a5ff7ce9ba9e3d42d71e55c9036 1 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d2b01a5ff7ce9ba9e3d42d71e55c9036 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.t3n 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.t3n 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.t3n 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:16:45.161 09:42:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d9216036f4029bfd3287e0808a117d9f07c1281b833869e6 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.RNP 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d9216036f4029bfd3287e0808a117d9f07c1281b833869e6 2 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d9216036f4029bfd3287e0808a117d9f07c1281b833869e6 2 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d9216036f4029bfd3287e0808a117d9f07c1281b833869e6 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.RNP 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.RNP 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.RNP 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:45.161 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ddb21d9faeb79af2afbce2594832ae33 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MXv 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ddb21d9faeb79af2afbce2594832ae33 0 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ddb21d9faeb79af2afbce2594832ae33 0 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ddb21d9faeb79af2afbce2594832ae33 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.MXv 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MXv 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.MXv 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=918d1ef4d4091d7b97607b57bb6258cdaf82bbcd9a8d5d46d1aa24a43093830e 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.9A7 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 918d1ef4d4091d7b97607b57bb6258cdaf82bbcd9a8d5d46d1aa24a43093830e 3 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 918d1ef4d4091d7b97607b57bb6258cdaf82bbcd9a8d5d46d1aa24a43093830e 3 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=918d1ef4d4091d7b97607b57bb6258cdaf82bbcd9a8d5d46d1aa24a43093830e 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:16:45.162 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.9A7 00:16:45.419 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.9A7 00:16:45.420 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.9A7 00:16:45.420 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:45.420 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78827 00:16:45.420 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78827 ']' 00:16:45.420 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.420 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.420 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
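Note on the gen_dhchap_key calls traced above: they reduce to reading random bytes from /dev/urandom with xxd and wrapping the resulting hex string into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<digest id>:<base64 payload>:. A minimal standalone sketch of that wrapping is shown here; it assumes the hidden inline python step base64-encodes the ASCII key followed by its CRC-32 (least-significant byte first), which is consistent with the DHHC-1:00:... values echoed into configfs later in this log. This is an illustrative reconstruction, not the harness's own helper.

  # Hypothetical stand-in for "gen_dhchap_key null 32" (digest id 00 = no hash transform);
  # sha256/sha384/sha512 keys use ids 01/02/03 and longer key lengths, per the digests map above.
  key=$(xxd -p -c0 -l 16 /dev/urandom)     # 16 random bytes -> 32 hex characters of key material
  # Wrap as DHHC-1:<id>:base64(key || CRC-32 of key, assumed little-endian):
  secret=$(python3 -c 'import sys,base64,struct,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode() + ":")' "$key")
  keyfile=$(mktemp -t spdk.key-null.XXX)   # e.g. /tmp/spdk.key-null.iWs
  echo "$secret" > "$keyfile"
  chmod 0600 "$keyfile"                    # the file path is what later gets registered over RPC
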
00:16:45.420 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.420 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iWs 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.rYt ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.rYt 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.mVm 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.0V0 ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0V0 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.T5N 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.t3n ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t3n 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.RNP 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.MXv ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.MXv 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.9A7 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:45.679 09:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
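Note on the configure_kernel_target call whose trace starts here: it builds an in-kernel NVMe-oF TCP target through configfs and exports one of the probed block devices as namespace 1. xtrace only prints the echo arguments, not the attribute files they are redirected into, so the sketch below is an illustrative reconstruction using the standard nvmet configfs attribute names together with the values visible in this run.

  # Assumed redirect targets; the NQN, the 10.0.0.1:4420 listener and /dev/nvme1n1 match the trace below.
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir -p "$subsys/namespaces/1" "$port"
  echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"
  echo 1 > "$subsys/attr_allow_any_host"        # auth.sh later flips this to 0 and whitelists host0
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"           # publish the subsystem on port 1

The mkdir under /sys/kernel/config/nvmet/hosts and the ln -s into the subsystem's allowed_hosts/ that appear further down are the per-host restriction that replaces allow_any_host before the DH-HMAC-CHAP key attributes are set.
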
00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:45.679 09:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:45.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:46.197 Waiting for block devices as requested 00:16:46.197 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:46.197 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:46.764 No valid GPT data, bailing 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:46.764 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:47.023 No valid GPT data, bailing 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:47.023 No valid GPT data, bailing 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:47.023 No valid GPT data, bailing 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:16:47.023 09:42:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid=d2f81337-7559-423d-93ce-5836d202b6da -a 10.0.0.1 -t tcp -s 4420 00:16:47.023 00:16:47.023 Discovery Log Number of Records 2, Generation counter 2 00:16:47.023 =====Discovery Log Entry 0====== 00:16:47.023 trtype: tcp 00:16:47.023 adrfam: ipv4 00:16:47.023 subtype: current discovery subsystem 00:16:47.023 treq: not specified, sq flow control disable supported 00:16:47.023 portid: 1 00:16:47.023 trsvcid: 4420 00:16:47.023 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:47.023 traddr: 10.0.0.1 00:16:47.023 eflags: none 00:16:47.023 sectype: none 00:16:47.023 =====Discovery Log Entry 1====== 00:16:47.023 trtype: tcp 00:16:47.023 adrfam: ipv4 00:16:47.023 subtype: nvme subsystem 00:16:47.023 treq: not specified, sq flow control disable supported 00:16:47.023 portid: 1 00:16:47.023 trsvcid: 4420 00:16:47.023 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:47.023 traddr: 10.0.0.1 00:16:47.023 eflags: none 00:16:47.023 sectype: none 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.023 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:47.282 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:47.282 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:16:47.282 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:47.282 09:42:41 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:16:47.282 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:47.282 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:47.282 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:47.282 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:47.282 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.282 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.283 nvme0n1 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.283 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.541 nvme0n1 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.541 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.542 09:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.804 nvme0n1 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.804 09:42:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.804 nvme0n1 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:47.804 09:42:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.804 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.805 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.074 nvme0n1 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:48.074 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.332 nvme0n1 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.332 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.589 09:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.847 nvme0n1 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.847 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.848 nvme0n1 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.848 09:42:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.848 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.106 nvme0n1 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.106 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.107 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.365 nvme0n1 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.365 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
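Every iteration in this loop begins with nvmet_auth_set_key, which loads the DH-HMAC-CHAP material for the allowed host nqn.2024-02.io.spdk:host0 into the kernel target (the host entry, an echo 0 that presumably clears allow-any-host, and the allowed_hosts symlink were created right after the discovery output earlier in the log). The hash name, FFDHE group, and DHHC-1 key strings being echoed are visible in the entries above, but the configfs files they land in are not captured by xtrace; the host attribute names in this sketch are the standard nvmet ones and should be read as assumptions.

    # sketch of one nvmet_auth_set_key call, e.g. the "sha256 ffdhe3072 4" iteration above;
    # $key and $ckey stand for the DHHC-1:... strings shown in the log
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(sha256)" > "$host/dhchap_hash"       # assumed target
    echo "ffdhe3072"    > "$host/dhchap_dhgroup"    # assumed target
    echo "$key"         > "$host/dhchap_key"        # assumed target
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # key id 4 has no ckey, so this step is skipped for it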
00:16:49.366 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:49.366 09:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:49.366 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:49.366 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.366 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.623 nvme0n1 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:49.623 09:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.190 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.449 nvme0n1 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.449 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.707 nvme0n1 00:16:50.707 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.707 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.707 09:42:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.707 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.707 09:42:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:16:50.707 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.708 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.967 nvme0n1 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.967 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.226 nvme0n1 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:51.226 09:42:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.226 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.485 nvme0n1 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:51.485 09:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:53.387 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.388 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.647 nvme0n1 00:16:53.647 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.647 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.647 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.647 09:42:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.647 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.647 09:42:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.647 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.214 nvme0n1 00:16:54.214 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.214 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.214 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.214 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.215 
09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.215 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.474 nvme0n1 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.474 09:42:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.041 nvme0n1 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:55.041 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.042 09:42:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.042 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.300 nvme0n1 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.300 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.301 09:42:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.234 nvme0n1 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.234 09:42:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:56.234 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.235 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.800 nvme0n1 00:16:56.800 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.800 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.800 09:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.800 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.800 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.800 09:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.800 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.801 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.366 nvme0n1 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.366 
09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.366 09:42:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.951 nvme0n1 00:16:57.951 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.951 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.951 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.951 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.951 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.951 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:58.209 
09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.209 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.210 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.210 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.210 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.210 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.210 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.210 09:42:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.210 09:42:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:58.210 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.210 09:42:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.826 nvme0n1 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.826 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.148 nvme0n1 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.148 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.149 nvme0n1 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.149 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.408 nvme0n1 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.408 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.713 nvme0n1 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.713 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.714 09:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.714 nvme0n1 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.714 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.972 nvme0n1 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.972 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.973 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:59.973 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.973 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:59.973 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:59.973 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:59.973 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.973 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.973 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.300 nvme0n1 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.300 nvme0n1 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.300 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.577 nvme0n1 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.577 09:42:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.935 nvme0n1 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.935 09:42:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.935 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.936 nvme0n1 00:17:00.936 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.195 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.195 nvme0n1 00:17:01.196 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.196 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.196 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.196 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.196 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.196 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.196 09:42:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.196 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.196 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.196 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.454 nvme0n1 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.454 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:01.712 09:42:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.712 09:42:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.712 nvme0n1 00:17:01.712 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.712 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.712 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.712 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.712 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.712 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.970 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.970 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.970 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.971 nvme0n1 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.971 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.229 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.488 nvme0n1 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.488 09:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.054 nvme0n1 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.054 09:42:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.054 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.312 nvme0n1 00:17:03.312 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.312 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.312 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.312 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.312 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.312 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.312 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.312 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.312 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.312 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.570 09:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.828 nvme0n1 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
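The block above is one pass of the pattern that repeats throughout this run: for every digest/DH-group/key-ID combination, host/auth.sh first provisions the DHHC-1 secret (and, for key IDs 0-3, a controller secret) on the nvmet target via nvmet_auth_set_key, then attempts an authenticated connect from the host. A minimal sketch of that target-side step follows, assuming the Linux kernel nvmet configfs attributes under /sys/kernel/config/nvmet/hosts/<hostnqn>; the redirection targets are not captured by xtrace in this excerpt, so those paths are an assumption, and the secrets shown are simply the key-ID-3 values echoed in the log.

# Target-side provisioning as performed by nvmet_auth_set_key (sketch; configfs paths assumed)
HOST_DIR=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # host NQN used by -q on the host side
echo 'hmac(sha384)' > "$HOST_DIR/dhchap_hash"      # digest under test
echo ffdhe6144      > "$HOST_DIR/dhchap_dhgroup"   # DH group under test
echo 'DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==:' \
    > "$HOST_DIR/dhchap_key"                       # host secret (key ID 3 in this run)
echo 'DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh:' \
    > "$HOST_DIR/dhchap_ctrl_key"                  # controller secret for bidirectional auth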
00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.828 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.395 nvme0n1 00:17:04.395 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.395 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.395 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
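On the SPDK host side, each iteration then reduces to four RPC calls: restrict the allowed digest and DH group, attach with the key name (and, when one exists, the controller key name) for the key ID under test, confirm the authenticated controller actually appears, and detach before the next combination. A condensed sketch of one cycle, assuming SPDK's scripts/rpc.py wrapper (the log drives the same RPCs through rpc_cmd) and key names key0/ckey0 registered earlier in the test, outside this excerpt:

# One authenticated connect/verify/detach cycle (sketch; values mirror the rpc_cmd calls above)
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# The step passes only if the authenticated controller shows up under the expected name.
[[ "$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
./scripts/rpc.py bdev_nvme_detach_controller nvme0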
00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.396 09:42:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.963 nvme0n1 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.963 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.530 nvme0n1 00:17:05.530 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.530 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.530 09:42:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.530 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.530 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.530 09:42:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.789 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.790 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.790 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.790 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.790 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.790 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.790 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.790 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.790 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.790 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.790 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.790 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.790 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.357 nvme0n1 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:06.357 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.358 09:43:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.924 nvme0n1 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.924 09:43:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.924 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.861 nvme0n1 00:17:07.861 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.861 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.861 09:43:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.861 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.861 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.861 09:43:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.861 nvme0n1 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.861 09:43:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:07.861 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.862 nvme0n1 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.862 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.121 nvme0n1 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.121 09:43:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.121 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.122 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.122 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.122 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.122 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.122 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.122 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.122 09:43:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:08.122 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.122 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.380 nvme0n1 00:17:08.380 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.380 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.381 nvme0n1 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.381 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.640 09:43:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.640 nvme0n1 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.640 
09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.640 09:43:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.640 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.899 nvme0n1 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
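[editor note] On the target side, nvmet_auth_set_key (traced here for sha512/ffdhe3072) stages the per-keyid secrets before each connect attempt. xtrace does not print redirections, so the destination paths below are an assumption: the echoes most plausibly land in the standard Linux nvmet configfs attributes for the allowed-host entry. A hedged sketch only:

# Sketch of nvmet_auth_set_key as traced above. The configfs paths are an
# assumption (standard kernel nvmet attribute names); the excerpt does not
# show where the script actually redirects its echoes.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    echo "hmac(${digest})" > "$host/dhchap_hash"      # e.g. hmac(sha512)
    echo "$dhgroup"        > "$host/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "$key"            > "$host/dhchap_key"       # DHHC-1:.. host secret
    # keyid 4 has no controller key, so only set one when it is defined
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}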
00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.899 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.900 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.900 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.158 nvme0n1 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.158 09:43:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
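The get_main_ns_ip fragment that begins at the end of the line above (nvmf/common.sh@741-@755) only decides which address the host should dial: it maps the transport name to the name of an environment variable and then dereferences it, which is why NVMF_INITIATOR_IP resolves to 10.0.0.1 throughout this run. A rough reconstruction from the xtrace; the variable holding the transport name is an assumption, since the trace only shows the already-resolved values:

    # Sketch of the IP-selection helper as reconstructed from the xtrace.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                     # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # trace: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}                     # trace: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                              # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                            # trace: echo 10.0.0.1
    }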
00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.158 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.416 nvme0n1 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.416 
09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.416 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.417 nvme0n1 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.417 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.676 09:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.676 nvme0n1 00:17:09.676 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.676 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.676 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.676 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.676 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.676 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:09.935 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.936 09:43:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 nvme0n1 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.936 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
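One detail worth calling out from host/auth.sh@58, which repeats on every round here: the controller key is optional. The ${ckeys[keyid]:+...} expansion yields either nothing or the pair --dhchap-ctrlr-key ckeyN, which is why the earlier attach for key 4 (whose ckey is empty in this run) carries only --dhchap-key key4. A tiny self-contained illustration of the idiom, with made-up placeholder values:

    #!/usr/bin/env bash
    # ${var:+word} expands to "word" only when var is set and non-empty.
    ckeys=( "secret0" "" )     # hypothetical: index 1 has no controller key

    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> attach args: --dhchap-key key${keyid} ${ckey[*]}"
    done
    # prints: keyid=0 -> attach args: --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # prints: keyid=1 -> attach args: --dhchap-key key1   (no controller-key flag)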
00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.194 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.195 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.452 nvme0n1 00:17:10.452 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.452 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:10.452 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.452 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.452 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.452 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.452 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.452 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.452 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.452 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.452 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.453 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.778 nvme0n1 00:17:10.778 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.778 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.778 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.778 09:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.778 09:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.778 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.059 nvme0n1 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
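The echo 'hmac(sha512)', echo ffdhe6144 and key echoes at host/auth.sh@48-@51 in the line above are the target-side half of nvmet_auth_set_key; xtrace does not record where those echoes are redirected. A hedged sketch of what such a setup typically writes, assuming the stock kernel nvmet configfs layout for per-host DH-HMAC-CHAP attributes (the paths and placeholder secrets below are assumptions, not taken from this log):

    # Assumed nvmet configfs layout; the actual redirection targets are not shown in the xtrace.
    host_nqn=nqn.2024-02.io.spdk:host0
    host_dir=/sys/kernel/config/nvmet/hosts/$host_nqn
    key='DHHC-1:00:...'    # placeholder for the host secret echoed at host/auth.sh@50
    ckey='DHHC-1:03:...'   # placeholder for the controller secret echoed at host/auth.sh@51

    echo 'hmac(sha512)' > "$host_dir/dhchap_hash"
    echo ffdhe6144      > "$host_dir/dhchap_dhgroup"
    echo "$key"         > "$host_dir/dhchap_key"
    echo "$ckey"        > "$host_dir/dhchap_ctrl_key"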
00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.059 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.318 nvme0n1 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
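Stepping back, the repetition in this excerpt is just the pair of loops visible at host/auth.sh@101-@104: every DH group in the test list is tried against every key index, and each pairing runs the same set-key / connect / verify / detach round. A sketch of that loop shape as it appears in the xtrace, with the helper bodies elided:

    # Loop structure reconstructed from host/auth.sh@101-@104 in the trace above.
    digest=sha512
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # groups seen in this excerpt; the real list may be longer

    for dhgroup in "${dhgroups[@]}"; do            # host/auth.sh@101
        for keyid in "${!keys[@]}"; do             # host/auth.sh@102: keys[] holds the DHHC-1 secrets
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # host/auth.sh@103: target-side key install
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # host/auth.sh@104: host attach, verify, detach
        done
    done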
00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.318 09:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.885 nvme0n1 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:11.885 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.886 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.144 nvme0n1 00:17:12.144 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.144 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.144 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.144 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.144 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.144 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.144 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.144 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.144 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.145 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.404 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.662 nvme0n1 00:17:12.662 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.662 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.662 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.662 09:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.662 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.662 09:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.662 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.663 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.230 nvme0n1 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.230 09:43:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTQ1M2E0OWM3YmU0YWIwMDUzYWI5ZjlkN2NjMmStCT77: 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: ]] 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjRmNDZiMTQ0ZTI2NjVjNzljNmM2OTRmZWUxMDU1ODUxZDdlYmVlNjQ2NzhjNTUxMGYyMWExMDg0NjBiNGNiZu/mjUE=: 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.230 09:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.231 09:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.231 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.231 09:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.797 nvme0n1 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.797 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.359 nvme0n1 00:17:14.359 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.359 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.359 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.359 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.359 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.359 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.359 09:43:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.359 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.359 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.359 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTUwMGZjMTAzYjFhMTU3YjlkNzkzYjFjMmRkMjNjYTQGRn5s: 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: ]] 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDJiMDFhNWZmN2NlOWJhOWUzZDQyZDcxZTU1YzkwMzYI1uMO: 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.615 09:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.200 nvme0n1 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDkyMTYwMzZmNDAyOWJmZDMyODdlMDgwOGExMTdkOWYwN2MxMjgxYjgzMzg2OWU2HmpWsw==: 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: ]] 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGRiMjFkOWZhZWI3OWFmMmFmYmNlMjU5NDgzMmFlMzOAo4Oh: 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:15.200 09:43:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.200 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.201 09:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.201 09:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:15.201 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.201 09:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.766 nvme0n1 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTE4ZDFlZjRkNDA5MWQ3Yjk3NjA3YjU3YmI2MjU4Y2RhZjgyYmJjZDlhOGQ1ZDQ2ZDFhYTI0YTQzMDkzODMwZSeK0Dw=: 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:15.766 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.699 nvme0n1 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTkyMTcxNjVjNGYxYzFkMjI2ZDBlZWZiMTU1ZjJiMWE0Mzk4YjFkZGNhN2FhYjU564cHgg==: 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: ]] 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDdiNGMwNDRjMmZiZWRlYzYzM2EwZjE4NmYwZTFlNmM2NDYxNTM1MjdkODQwNDBmEt37BQ==: 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.699 
09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.699 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.699 request: 00:17:16.699 { 00:17:16.699 "name": "nvme0", 00:17:16.699 "trtype": "tcp", 00:17:16.699 "traddr": "10.0.0.1", 00:17:16.699 "adrfam": "ipv4", 00:17:16.699 "trsvcid": "4420", 00:17:16.700 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:16.700 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:16.700 "prchk_reftag": false, 00:17:16.700 "prchk_guard": false, 00:17:16.700 "hdgst": false, 00:17:16.700 "ddgst": false, 00:17:16.700 "method": "bdev_nvme_attach_controller", 00:17:16.700 "req_id": 1 00:17:16.700 } 00:17:16.700 Got JSON-RPC error response 00:17:16.700 response: 00:17:16.700 { 00:17:16.700 "code": -5, 00:17:16.700 "message": "Input/output error" 00:17:16.700 } 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.700 09:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.700 request: 00:17:16.700 { 00:17:16.700 "name": "nvme0", 00:17:16.700 "trtype": "tcp", 00:17:16.700 "traddr": "10.0.0.1", 00:17:16.700 "adrfam": "ipv4", 00:17:16.700 "trsvcid": "4420", 00:17:16.700 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:16.700 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:16.700 "prchk_reftag": false, 00:17:16.700 "prchk_guard": false, 00:17:16.700 "hdgst": false, 00:17:16.700 "ddgst": false, 00:17:16.700 "dhchap_key": "key2", 00:17:16.700 "method": "bdev_nvme_attach_controller", 00:17:16.700 "req_id": 1 00:17:16.700 } 00:17:16.700 Got JSON-RPC error response 00:17:16.700 response: 00:17:16.700 { 00:17:16.700 "code": -5, 00:17:16.700 "message": "Input/output error" 00:17:16.700 } 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:16.700 09:43:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.700 request: 00:17:16.700 { 00:17:16.700 "name": "nvme0", 00:17:16.700 "trtype": "tcp", 00:17:16.700 "traddr": "10.0.0.1", 00:17:16.700 "adrfam": "ipv4", 
00:17:16.700 "trsvcid": "4420", 00:17:16.700 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:16.700 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:16.700 "prchk_reftag": false, 00:17:16.700 "prchk_guard": false, 00:17:16.700 "hdgst": false, 00:17:16.700 "ddgst": false, 00:17:16.700 "dhchap_key": "key1", 00:17:16.700 "dhchap_ctrlr_key": "ckey2", 00:17:16.700 "method": "bdev_nvme_attach_controller", 00:17:16.700 "req_id": 1 00:17:16.700 } 00:17:16.700 Got JSON-RPC error response 00:17:16.700 response: 00:17:16.700 { 00:17:16.700 "code": -5, 00:17:16.700 "message": "Input/output error" 00:17:16.700 } 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:16.700 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:16.700 rmmod nvme_tcp 00:17:16.958 rmmod nvme_fabrics 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78827 ']' 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78827 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78827 ']' 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78827 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78827 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:16.958 killing process with pid 78827 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78827' 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78827 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78827 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:16.958 
09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.958 09:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:17.217 09:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:17.781 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:17.781 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:18.036 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:18.036 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.iWs /tmp/spdk.key-null.mVm /tmp/spdk.key-sha256.T5N /tmp/spdk.key-sha384.RNP /tmp/spdk.key-sha512.9A7 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:18.036 09:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:18.294 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:18.294 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:18.294 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:18.294 00:17:18.294 real 0m35.355s 00:17:18.294 user 0m31.910s 00:17:18.294 sys 0m3.821s 00:17:18.294 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.294 09:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.294 
************************************ 00:17:18.294 END TEST nvmf_auth_host 00:17:18.294 ************************************ 00:17:18.557 09:43:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:18.557 09:43:12 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:17:18.557 09:43:12 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:18.557 09:43:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:18.557 09:43:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.557 09:43:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:18.557 ************************************ 00:17:18.557 START TEST nvmf_digest 00:17:18.557 ************************************ 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:18.557 * Looking for test storage... 00:17:18.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.557 09:43:12 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:18.558 Cannot find device "nvmf_tgt_br" 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.558 Cannot find device "nvmf_tgt_br2" 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:18.558 Cannot find device "nvmf_tgt_br" 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:18.558 09:43:12 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:18.558 Cannot find device "nvmf_tgt_br2" 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:18.558 09:43:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:18.558 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:18.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:18.817 09:43:13 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:18.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:17:18.817 00:17:18.817 --- 10.0.0.2 ping statistics --- 00:17:18.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.817 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:18.817 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:18.817 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:18.817 00:17:18.817 --- 10.0.0.3 ping statistics --- 00:17:18.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.817 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:18.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:18.817 00:17:18.817 --- 10.0.0.1 ping statistics --- 00:17:18.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.817 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:18.817 ************************************ 00:17:18.817 START TEST nvmf_digest_clean 00:17:18.817 ************************************ 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:18.817 09:43:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80402 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80402 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80402 ']' 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.817 09:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:19.075 [2024-07-15 09:43:13.300077] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:17:19.075 [2024-07-15 09:43:13.300174] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.075 [2024-07-15 09:43:13.441691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.333 [2024-07-15 09:43:13.572858] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.333 [2024-07-15 09:43:13.572942] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.333 [2024-07-15 09:43:13.572957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.333 [2024-07-15 09:43:13.572968] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.333 [2024-07-15 09:43:13.572978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
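For reference, the topology that nvmf_veth_init assembles a few entries above reduces to roughly the following commands. This is a condensed sketch of the ip/iptables calls traced in this log (run as root); the interface names, addresses, and port are exactly the ones shown above, while the loop and ordering glue is only shorthand and not part of the original script.

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair for the initiator side, two for the target listeners
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator at 10.0.0.1, target listeners at 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers together, allow NVMe/TCP (port 4420) and bridged forwarding
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity checks in both directions, matching the pings above
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1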
00:17:19.333 [2024-07-15 09:43:13.573009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.899 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.899 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:19.899 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:19.899 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:19.899 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:19.899 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.899 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:19.899 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:19.899 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:19.899 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.899 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:19.899 [2024-07-15 09:43:14.327475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:20.157 null0 00:17:20.157 [2024-07-15 09:43:14.376390] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.157 [2024-07-15 09:43:14.400481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80434 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80434 /var/tmp/bperf.sock 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80434 ']' 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:20.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.157 09:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:20.157 [2024-07-15 09:43:14.455686] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:17:20.157 [2024-07-15 09:43:14.456126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80434 ] 00:17:20.157 [2024-07-15 09:43:14.593024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.416 [2024-07-15 09:43:14.720889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.982 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.982 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:20.982 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:20.982 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:20.982 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:21.547 [2024-07-15 09:43:15.741886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:21.547 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:21.547 09:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:21.805 nvme0n1 00:17:21.806 09:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:21.806 09:43:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:21.806 Running I/O for 2 seconds... 
00:17:24.335 00:17:24.335 Latency(us) 00:17:24.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.335 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:24.335 nvme0n1 : 2.01 15069.64 58.87 0.00 0.00 8486.96 7804.74 25022.84 00:17:24.335 =================================================================================================================== 00:17:24.335 Total : 15069.64 58.87 0.00 0.00 8486.96 7804.74 25022.84 00:17:24.335 0 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:24.335 | select(.opcode=="crc32c") 00:17:24.335 | "\(.module_name) \(.executed)"' 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80434 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80434 ']' 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80434 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80434 00:17:24.335 killing process with pid 80434 00:17:24.335 Received shutdown signal, test time was about 2.000000 seconds 00:17:24.335 00:17:24.335 Latency(us) 00:17:24.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.335 =================================================================================================================== 00:17:24.335 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80434' 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80434 00:17:24.335 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80434 00:17:24.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
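Stripped of the xtrace noise, each run_bperf iteration above drives bdevperf entirely over its RPC socket. A minimal sketch of that flow, using the binaries, socket path, and arguments exactly as they appear in this log (the backgrounding and ordering glue is shorthand; the harness additionally waits for the socket to come up before issuing RPCs):

    # start bdevperf idle: -z keeps it alive, --wait-for-rpc defers framework init
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # finish initialization, then attach the target with TCP data digest enabled (--ddgst)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the 2-second workload against the resulting nvme0n1 bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests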
00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80494 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80494 /var/tmp/bperf.sock 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80494 ']' 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.632 09:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:24.632 [2024-07-15 09:43:18.868038] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:17:24.632 [2024-07-15 09:43:18.868392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80494 ] 00:17:24.632 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:24.632 Zero copy mechanism will not be used. 
00:17:24.632 [2024-07-15 09:43:19.005912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.890 [2024-07-15 09:43:19.125931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.456 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.456 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:25.456 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:25.456 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:25.456 09:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:25.714 [2024-07-15 09:43:20.113491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:25.714 09:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:25.714 09:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:26.280 nvme0n1 00:17:26.280 09:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:26.280 09:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:26.280 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:26.280 Zero copy mechanism will not be used. 00:17:26.280 Running I/O for 2 seconds... 
00:17:28.182 00:17:28.182 Latency(us) 00:17:28.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.182 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:28.182 nvme0n1 : 2.00 7188.20 898.53 0.00 0.00 2222.26 1995.87 5213.09 00:17:28.182 =================================================================================================================== 00:17:28.182 Total : 7188.20 898.53 0.00 0.00 2222.26 1995.87 5213.09 00:17:28.182 0 00:17:28.182 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:28.182 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:28.182 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:28.182 | select(.opcode=="crc32c") 00:17:28.182 | "\(.module_name) \(.executed)"' 00:17:28.182 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:28.182 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80494 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80494 ']' 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80494 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80494 00:17:28.749 killing process with pid 80494 00:17:28.749 Received shutdown signal, test time was about 2.000000 seconds 00:17:28.749 00:17:28.749 Latency(us) 00:17:28.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.749 =================================================================================================================== 00:17:28.749 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80494' 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80494 00:17:28.749 09:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80494 00:17:28.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
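The pass/fail decision after each run (host/digest.sh@93 through @96 in the traces above) comes down to asking the bperf app which accel module executed the CRC32C digest operations and how many it completed. A stand-alone version of that check, reusing the jq filter shown in the log (the read/test glue is shorthand; with scan_dsa=false the expected module is software):

    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    # digests must actually have been computed, and by the software module
    (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c digest check passed"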
00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80559 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80559 /var/tmp/bperf.sock 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80559 ']' 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.749 09:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:29.008 [2024-07-15 09:43:23.227917] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:17:29.008 [2024-07-15 09:43:23.228197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80559 ] 00:17:29.008 [2024-07-15 09:43:23.361814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.008 [2024-07-15 09:43:23.474525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.943 09:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.943 09:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:29.943 09:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:29.943 09:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:29.943 09:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:30.202 [2024-07-15 09:43:24.487628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:30.202 09:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:30.202 09:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:30.459 nvme0n1 00:17:30.459 09:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:30.459 09:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:30.716 Running I/O for 2 seconds... 
00:17:32.614 00:17:32.614 Latency(us) 00:17:32.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.614 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.614 nvme0n1 : 2.00 16174.59 63.18 0.00 0.00 7904.26 7268.54 16086.11 00:17:32.614 =================================================================================================================== 00:17:32.614 Total : 16174.59 63.18 0.00 0.00 7904.26 7268.54 16086.11 00:17:32.614 0 00:17:32.614 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:32.614 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:32.614 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:32.614 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:32.614 | select(.opcode=="crc32c") 00:17:32.614 | "\(.module_name) \(.executed)"' 00:17:32.614 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:32.871 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:32.871 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:32.871 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:32.871 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:32.871 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80559 00:17:32.871 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80559 ']' 00:17:32.871 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80559 00:17:32.871 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:32.871 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:32.871 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80559 00:17:32.871 killing process with pid 80559 00:17:32.871 Received shutdown signal, test time was about 2.000000 seconds 00:17:32.871 00:17:32.871 Latency(us) 00:17:32.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.872 =================================================================================================================== 00:17:32.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.872 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:32.872 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:32.872 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80559' 00:17:32.872 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80559 00:17:32.872 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80559 00:17:33.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
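Taken together, nvmf_digest_clean sweeps a small workload matrix with data digest enabled and DSA scanning disabled: small blocks at high queue depth and large blocks at low queue depth, for both reads and writes. Condensed from the host/digest.sh@128 through @131 invocations in this log (the last of which starts just below), the sequence is equivalent to:

    # run_bperf <rw> <block size> <queue depth> <scan_dsa>
    run_bperf randread  4096   128 false    # 4 KiB reads,   qd 128
    run_bperf randread  131072  16 false    # 128 KiB reads, qd 16 (above the 64 KiB zero-copy threshold)
    run_bperf randwrite 4096   128 false    # 4 KiB writes,  qd 128
    run_bperf randwrite 131072  16 false    # 128 KiB writes, qd 16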
00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80615 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80615 /var/tmp/bperf.sock 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80615 ']' 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.130 09:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:33.388 [2024-07-15 09:43:27.628329] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:17:33.388 [2024-07-15 09:43:27.628975] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80615 ] 00:17:33.388 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:33.388 Zero copy mechanism will not be used. 
00:17:33.388 [2024-07-15 09:43:27.771226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.647 [2024-07-15 09:43:27.891170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.213 09:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.213 09:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:34.213 09:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:34.214 09:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:34.214 09:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:34.779 [2024-07-15 09:43:28.979319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:34.779 09:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:34.779 09:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:35.037 nvme0n1 00:17:35.037 09:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:35.037 09:43:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:35.295 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:35.295 Zero copy mechanism will not be used. 00:17:35.295 Running I/O for 2 seconds... 
00:17:37.191 00:17:37.191 Latency(us) 00:17:37.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.191 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:37.191 nvme0n1 : 2.00 6232.07 779.01 0.00 0.00 2561.67 2427.81 6970.65 00:17:37.191 =================================================================================================================== 00:17:37.191 Total : 6232.07 779.01 0.00 0.00 2561.67 2427.81 6970.65 00:17:37.191 0 00:17:37.191 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:37.191 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:37.191 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:37.191 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:37.191 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:37.191 | select(.opcode=="crc32c") 00:17:37.191 | "\(.module_name) \(.executed)"' 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80615 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80615 ']' 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80615 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80615 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:37.450 killing process with pid 80615 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80615' 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80615 00:17:37.450 Received shutdown signal, test time was about 2.000000 seconds 00:17:37.450 00:17:37.450 Latency(us) 00:17:37.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.450 =================================================================================================================== 00:17:37.450 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.450 09:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80615 00:17:37.708 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80402 00:17:37.708 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80402 ']' 00:17:37.708 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80402 00:17:37.708 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:37.708 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:37.708 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80402 00:17:37.708 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:37.708 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:37.708 killing process with pid 80402 00:17:37.708 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80402' 00:17:37.708 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80402 00:17:37.708 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80402 00:17:37.967 00:17:37.967 real 0m19.116s 00:17:37.967 user 0m37.363s 00:17:37.967 sys 0m4.757s 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:37.967 ************************************ 00:17:37.967 END TEST nvmf_digest_clean 00:17:37.967 ************************************ 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:37.967 ************************************ 00:17:37.967 START TEST nvmf_digest_error 00:17:37.967 ************************************ 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80704 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80704 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80704 ']' 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.967 09:43:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:38.226 [2024-07-15 09:43:32.469427] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:17:38.226 [2024-07-15 09:43:32.469522] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.226 [2024-07-15 09:43:32.605584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.484 [2024-07-15 09:43:32.718004] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.484 [2024-07-15 09:43:32.718054] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.484 [2024-07-15 09:43:32.718067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.484 [2024-07-15 09:43:32.718075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.484 [2024-07-15 09:43:32.718083] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.484 [2024-07-15 09:43:32.718107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.048 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.048 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:39.048 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:39.048 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:39.048 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:39.305 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:39.306 [2024-07-15 09:43:33.526582] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.306 09:43:33 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:39.306 [2024-07-15 09:43:33.587202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:39.306 null0 00:17:39.306 [2024-07-15 09:43:33.636099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.306 [2024-07-15 09:43:33.660210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80736 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80736 /var/tmp/bperf.sock 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80736 ']' 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:39.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.306 09:43:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:39.306 [2024-07-15 09:43:33.722631] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:17:39.306 [2024-07-15 09:43:33.722740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80736 ] 00:17:39.563 [2024-07-15 09:43:33.864854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.564 [2024-07-15 09:43:33.993853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.822 [2024-07-15 09:43:34.051826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:40.393 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.393 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:40.393 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:40.393 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:40.658 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:40.658 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.658 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:40.658 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.658 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:40.658 09:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:40.917 nvme0n1 00:17:40.917 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:40.917 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.917 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:40.917 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.917 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:40.917 09:43:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:40.917 Running I/O for 2 seconds... 
00:17:40.917 [2024-07-15 09:43:35.362066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:40.917 [2024-07-15 09:43:35.362150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.917 [2024-07-15 09:43:35.362167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.917 [2024-07-15 09:43:35.379846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:40.917 [2024-07-15 09:43:35.379919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.917 [2024-07-15 09:43:35.379935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.396860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.396915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.396931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.413777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.413828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.413843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.430713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.430776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.430790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.447638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.447701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.447716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.464657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.464720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.464735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.481584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.481646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.481661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.498512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.498568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.498584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.515412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.515467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.515482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.532271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.532311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.532326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.549144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.549184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.549198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.565973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.566015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.566029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.582885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.582938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.582952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.599806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.599845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.599859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.616680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.616721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.616735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.176 [2024-07-15 09:43:35.633600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.176 [2024-07-15 09:43:35.633640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.176 [2024-07-15 09:43:35.633654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.434 [2024-07-15 09:43:35.650482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.434 [2024-07-15 09:43:35.650524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.434 [2024-07-15 09:43:35.650538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.434 [2024-07-15 09:43:35.667371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.667412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.667426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.684228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.684267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.684281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.701135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.701174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.701189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.718045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.718084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.718098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.734954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.734993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.735008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.751807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.751846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.751861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.768676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.768716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.768731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.785529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.785581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.785595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.802480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.802519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.802537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.819337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.819376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 
[2024-07-15 09:43:35.819390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.836216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.836254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.836268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.853063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.853101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.853116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.869872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.869928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.869943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.435 [2024-07-15 09:43:35.886715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.435 [2024-07-15 09:43:35.886755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.435 [2024-07-15 09:43:35.886769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:35.903545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:35.903584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:35.903598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:35.920405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:35.920443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:35.920457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:35.937214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:35.937253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12413 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:35.937267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:35.954028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:35.954070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:35.954085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:35.970971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:35.971009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:35.971023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:35.987781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:35.987820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:35.987833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:36.004610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:36.004651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:36.004665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:36.021472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:36.021515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:36.021531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:36.038349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:36.038388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:36.038402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:36.055184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:36.055223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:2463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:36.055237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:36.071990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:36.072031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:36.072045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:36.088835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:36.088876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:36.088891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:36.105690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:36.105733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:36.105747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.693 [2024-07-15 09:43:36.122558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.693 [2024-07-15 09:43:36.122600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.693 [2024-07-15 09:43:36.122615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.694 [2024-07-15 09:43:36.139424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.694 [2024-07-15 09:43:36.139463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.694 [2024-07-15 09:43:36.139477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.694 [2024-07-15 09:43:36.156285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.694 [2024-07-15 09:43:36.156325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.694 [2024-07-15 09:43:36.156339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.952 [2024-07-15 09:43:36.173150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.952 [2024-07-15 09:43:36.173189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.952 [2024-07-15 09:43:36.173204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.952 [2024-07-15 09:43:36.189942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.952 [2024-07-15 09:43:36.189980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.952 [2024-07-15 09:43:36.189994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.206717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.953 [2024-07-15 09:43:36.206756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.206770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.223603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.953 [2024-07-15 09:43:36.223645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.223659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.240519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.953 [2024-07-15 09:43:36.240558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.240572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.257371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.953 [2024-07-15 09:43:36.257413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.257428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.274274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.953 [2024-07-15 09:43:36.274313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.274327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.291277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 
00:17:41.953 [2024-07-15 09:43:36.291315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.291329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.308102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.953 [2024-07-15 09:43:36.308141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.308154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.324877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.953 [2024-07-15 09:43:36.324931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.324946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.341730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.953 [2024-07-15 09:43:36.341770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.341784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.358549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.953 [2024-07-15 09:43:36.358589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.358603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.375333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.953 [2024-07-15 09:43:36.375372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.375386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.392196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.953 [2024-07-15 09:43:36.392243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.392257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.953 [2024-07-15 09:43:36.409063] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:41.953 [2024-07-15 09:43:36.409102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.953 [2024-07-15 09:43:36.409116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.433223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.433262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.433277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.450043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.450081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.450095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.466872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.466924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.466939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.483779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.483823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.483838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.500650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.500687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.500701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.517504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.517550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.517564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.534310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.534351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.534366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.551138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.551178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.551192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.567934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.567973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.567987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.584955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.584995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.585009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.601821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.601861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.601875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.618752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.618796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.618810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.635574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.211 [2024-07-15 09:43:36.635614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.211 [2024-07-15 09:43:36.635628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.211 [2024-07-15 09:43:36.652429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.212 [2024-07-15 09:43:36.652471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.212 [2024-07-15 09:43:36.652486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.212 [2024-07-15 09:43:36.669307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.212 [2024-07-15 09:43:36.669360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.212 [2024-07-15 09:43:36.669382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.469 [2024-07-15 09:43:36.686146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.469 [2024-07-15 09:43:36.686185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.469 [2024-07-15 09:43:36.686199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.469 [2024-07-15 09:43:36.703001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.469 [2024-07-15 09:43:36.703050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.469 [2024-07-15 09:43:36.703064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.469 [2024-07-15 09:43:36.719845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.469 [2024-07-15 09:43:36.719887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.469 [2024-07-15 09:43:36.719917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.469 [2024-07-15 09:43:36.736685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.469 [2024-07-15 09:43:36.736744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.469 [2024-07-15 09:43:36.736760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.469 [2024-07-15 09:43:36.753709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.469 [2024-07-15 09:43:36.753757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.469 [2024-07-15 09:43:36.753772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.469 [2024-07-15 09:43:36.770687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.470 [2024-07-15 09:43:36.770742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.470 [2024-07-15 09:43:36.770758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.470 [2024-07-15 09:43:36.787544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.470 [2024-07-15 09:43:36.787583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.470 [2024-07-15 09:43:36.787597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.470 [2024-07-15 09:43:36.804451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.470 [2024-07-15 09:43:36.804503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.470 [2024-07-15 09:43:36.804519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.470 [2024-07-15 09:43:36.821376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.470 [2024-07-15 09:43:36.821417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.470 [2024-07-15 09:43:36.821436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.470 [2024-07-15 09:43:36.838274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.470 [2024-07-15 09:43:36.838313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.470 [2024-07-15 09:43:36.838328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.470 [2024-07-15 09:43:36.855123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.470 [2024-07-15 09:43:36.855164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.470 [2024-07-15 09:43:36.855187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.470 [2024-07-15 09:43:36.871964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.470 [2024-07-15 09:43:36.872010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.470 
[2024-07-15 09:43:36.872025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.470 [2024-07-15 09:43:36.888825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.470 [2024-07-15 09:43:36.888863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.470 [2024-07-15 09:43:36.888877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.470 [2024-07-15 09:43:36.905677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.470 [2024-07-15 09:43:36.905724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.470 [2024-07-15 09:43:36.905738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.470 [2024-07-15 09:43:36.922470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.470 [2024-07-15 09:43:36.922510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.470 [2024-07-15 09:43:36.922524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:36.939293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:36.939331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:36.939345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:36.956120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:36.956161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:36.956175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:36.972961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:36.973021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:36.973045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:36.989760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:36.989799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22982 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:36.989813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:37.006611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:37.006663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:37.006678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:37.023432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:37.023471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:37.023485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:37.040189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:37.040227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:37.040242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:37.057024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:37.057063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:37.057077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:37.073872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:37.073924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:37.073939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:37.090648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:37.090692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:37.090706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:37.107553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:37.107599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:11891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:37.107613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:37.124382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:37.124420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:37.124434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:37.141219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:37.141257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:37.141270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:37.158124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:37.158175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:37.158189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:37.174983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:37.175021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:37.175035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.728 [2024-07-15 09:43:37.191873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.728 [2024-07-15 09:43:37.191929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.728 [2024-07-15 09:43:37.191944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.986 [2024-07-15 09:43:37.208719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.986 [2024-07-15 09:43:37.208757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.986 [2024-07-15 09:43:37.208771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.986 [2024-07-15 09:43:37.225579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.986 [2024-07-15 09:43:37.225616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.986 [2024-07-15 09:43:37.225630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.986 [2024-07-15 09:43:37.242516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.986 [2024-07-15 09:43:37.242554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.986 [2024-07-15 09:43:37.242569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.986 [2024-07-15 09:43:37.259420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.986 [2024-07-15 09:43:37.259458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.986 [2024-07-15 09:43:37.259472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.986 [2024-07-15 09:43:37.276261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.986 [2024-07-15 09:43:37.276300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.986 [2024-07-15 09:43:37.276314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.986 [2024-07-15 09:43:37.293207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.986 [2024-07-15 09:43:37.293255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.986 [2024-07-15 09:43:37.293270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.986 [2024-07-15 09:43:37.310047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.986 [2024-07-15 09:43:37.310085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.986 [2024-07-15 09:43:37.310099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.986 [2024-07-15 09:43:37.326938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.986 [2024-07-15 09:43:37.326975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.986 [2024-07-15 09:43:37.326989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.986 [2024-07-15 09:43:37.343391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7de020) 00:17:42.986 
[2024-07-15 09:43:37.343445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.986 [2024-07-15 09:43:37.343460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.986 00:17:42.986 Latency(us) 00:17:42.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.986 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:42.986 nvme0n1 : 2.01 14978.16 58.51 0.00 0.00 8538.43 7983.48 32648.84 00:17:42.986 =================================================================================================================== 00:17:42.986 Total : 14978.16 58.51 0.00 0.00 8538.43 7983.48 32648.84 00:17:42.986 0 00:17:42.986 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:42.986 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:42.986 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:42.986 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:42.986 | .driver_specific 00:17:42.986 | .nvme_error 00:17:42.986 | .status_code 00:17:42.986 | .command_transient_transport_error' 00:17:43.244 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 )) 00:17:43.244 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80736 00:17:43.244 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80736 ']' 00:17:43.244 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80736 00:17:43.244 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:43.244 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:43.244 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80736 00:17:43.244 killing process with pid 80736 00:17:43.244 Received shutdown signal, test time was about 2.000000 seconds 00:17:43.244 00:17:43.244 Latency(us) 00:17:43.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.244 =================================================================================================================== 00:17:43.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:43.244 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:43.244 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:43.244 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80736' 00:17:43.244 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80736 00:17:43.244 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80736 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local 
rw bs qd 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80796 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80796 /var/tmp/bperf.sock 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80796 ']' 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:43.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.505 09:43:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:43.505 [2024-07-15 09:43:37.939077] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:17:43.505 [2024-07-15 09:43:37.939483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80796 ] 00:17:43.505 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:43.505 Zero copy mechanism will not be used. 
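The trace above is the check that closes the previous digest-error pass: host/digest.sh reads the bdev's NVMe error counters back over the bperf RPC socket with bdev_get_iostat, drills down with jq to the command_transient_transport_error count, asserts it is non-zero (118 here), kills that bdevperf instance, and then launches a fresh one for the 131072-byte, qd=16 pass. A minimal sketch of that readback, reusing the rpc.py path, socket and jq path shown in the trace; the standalone arithmetic test is an illustrative stand-in for the script's own assertion:

  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # the digest-error case only passes if transient transport errors were actually recorded
  (( count > 0 ))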
00:17:43.763 [2024-07-15 09:43:38.075910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.763 [2024-07-15 09:43:38.188223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.020 [2024-07-15 09:43:38.242489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:44.599 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.599 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:44.599 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:44.599 09:43:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:44.889 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:44.889 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.889 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:44.889 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.889 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.889 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:45.147 nvme0n1 00:17:45.147 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:45.147 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.147 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:45.147 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.147 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:45.147 09:43:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:45.147 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:45.147 Zero copy mechanism will not be used. 00:17:45.147 Running I/O for 2 seconds... 
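The lines above are the setup half of this digest-error case: bdevperf is started with -z so it sits idle on /var/tmp/bperf.sock, NVMe error statistics are enabled and retries disabled on the bperf side, crc32c error injection is first cleared and then re-armed in corrupt mode through rpc_cmd (no -s, so the default RPC socket rather than the bperf one), the controller is attached over TCP with data digests enabled (--ddgst), and perform_tests drives the 2-second randread run whose per-I/O data digest errors fill the rest of this log. A condensed sketch of that sequence using the same binaries and arguments shown in the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  # bperf side: keep per-command NVMe error counters and never retry failed I/O
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # default RPC socket (rpc_cmd in the trace): make sure no crc32c injection is armed while attaching
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # bperf side: attach over TCP with data digest enabled so the receive path verifies crc32c
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # default RPC socket: arm crc32c corruption (-i 32 as in the trace) so data digest checks fail
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # run the workload; the TRANSIENT TRANSPORT ERROR completions below come from this run
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests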
00:17:45.147 [2024-07-15 09:43:39.594767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.147 [2024-07-15 09:43:39.594839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.147 [2024-07-15 09:43:39.594864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.147 [2024-07-15 09:43:39.599229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.147 [2024-07-15 09:43:39.599275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.147 [2024-07-15 09:43:39.599297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.147 [2024-07-15 09:43:39.603719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.147 [2024-07-15 09:43:39.603767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.147 [2024-07-15 09:43:39.603790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.147 [2024-07-15 09:43:39.608026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.147 [2024-07-15 09:43:39.608072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.147 [2024-07-15 09:43:39.608092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.147 [2024-07-15 09:43:39.612268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.147 [2024-07-15 09:43:39.612314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.147 [2024-07-15 09:43:39.612336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.407 [2024-07-15 09:43:39.616404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.407 [2024-07-15 09:43:39.616451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.407 [2024-07-15 09:43:39.616473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.407 [2024-07-15 09:43:39.620637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.407 [2024-07-15 09:43:39.620684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.407 [2024-07-15 09:43:39.620704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.407 [2024-07-15 09:43:39.625001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.407 [2024-07-15 09:43:39.625057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.407 [2024-07-15 09:43:39.625078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.407 [2024-07-15 09:43:39.629306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.407 [2024-07-15 09:43:39.629353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.407 [2024-07-15 09:43:39.629375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.407 [2024-07-15 09:43:39.633715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.407 [2024-07-15 09:43:39.633761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.407 [2024-07-15 09:43:39.633781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.407 [2024-07-15 09:43:39.638095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.407 [2024-07-15 09:43:39.638139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.407 [2024-07-15 09:43:39.638159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.407 [2024-07-15 09:43:39.642427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.407 [2024-07-15 09:43:39.642473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.407 [2024-07-15 09:43:39.642494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.407 [2024-07-15 09:43:39.646701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.407 [2024-07-15 09:43:39.646747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.407 [2024-07-15 09:43:39.646767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.407 [2024-07-15 09:43:39.651034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.407 [2024-07-15 09:43:39.651080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.407 [2024-07-15 09:43:39.651101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.407 [2024-07-15 09:43:39.655407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.407 [2024-07-15 09:43:39.655455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.407 [2024-07-15 09:43:39.655490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.407 [2024-07-15 09:43:39.659840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.407 [2024-07-15 09:43:39.659886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.407 [2024-07-15 09:43:39.659948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.407 [2024-07-15 09:43:39.664341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.407 [2024-07-15 09:43:39.664386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.407 [2024-07-15 09:43:39.664407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.668587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.668633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.668653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.672989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.673062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.673083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.677266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.677312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.677332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.681631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.681676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:45.408 [2024-07-15 09:43:39.681697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.686050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.686093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.686114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.690414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.690461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.690482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.694749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.694795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.694816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.699081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.699126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.699148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.703275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.703319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.703340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.707655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.707702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.707724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.712043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.712103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.712124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.716521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.716567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.716587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.720856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.720917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.720940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.725231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.725276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.725297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.729646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.729692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.729712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.734152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.734197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.734218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.738592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.738638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.738659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.743129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.743174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.743194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.747525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.747571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.747592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.751837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.751882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.751938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.756233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.756279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.756300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.760666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.760713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.760733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.764975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.765042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.765063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.769278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.769324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.769346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.773563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 
00:17:45.408 [2024-07-15 09:43:39.773608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.773628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.777905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.777961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.777982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.782205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.782250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.782270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.408 [2024-07-15 09:43:39.786574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.408 [2024-07-15 09:43:39.786620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.408 [2024-07-15 09:43:39.786640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.790971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.791015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.791037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.795291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.795336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.795357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.799785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.799829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.799849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.804243] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.804288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.804308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.808566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.808612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.808634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.812967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.813020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.813043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.817374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.817420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.817442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.821716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.821761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.821781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.826085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.826128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.826149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.830548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.830594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.830614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.835033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.835078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.835099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.839349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.839395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.839416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.843778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.843824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.843844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.848187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.848233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.848253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.852484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.852530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.852551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.856995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.857089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.857110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.861725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.861772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.861795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.866202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.866248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.866268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.409 [2024-07-15 09:43:39.870598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.409 [2024-07-15 09:43:39.870642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.409 [2024-07-15 09:43:39.870662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.875005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.875048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.875085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.879430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.879505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.879524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.883879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.883960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.883983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.888380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.888426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.888462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.892827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.892872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.892904] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.897375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.897422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.897442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.901697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.901741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.901760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.906103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.906145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.906166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.910263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.910307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.910327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.914500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.914546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.914567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.918804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.918849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.918869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.923046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.923090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.923110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.927552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.927596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.927619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.932271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.932317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.932338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.937378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.937424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.937445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.941753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.941800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.941821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.946873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.946947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.946970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.952135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.952180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.952201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.957636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.957678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.957697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.962886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.962941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.962963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.968185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.968231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.968252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.973378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.973425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.973445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.978679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.978731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.978752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.984147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.984192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.984212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.989279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.989337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.672 [2024-07-15 09:43:39.989359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:45.672 [2024-07-15 09:43:39.994504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:45.672 [2024-07-15 09:43:39.994548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:45.672 [2024-07-15 09:43:39.994568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:45.672 [2024-07-15 09:43:39.999621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0)
00:17:45.673 [2024-07-15 09:43:39.999668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:45.673 [2024-07-15 09:43:39.999688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c:1459 "data digest error on tqpair=(0x2228ac0)", nvme_qpair.c:243 READ sqid:1 cid:15 nsid:1 at varying lba, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0001/0021/0041/0061) repeats for the remaining READ commands, timestamps 2024-07-15 09:43:40.004 through 09:43:40.595 ...]
00:17:46.197 [2024-07-15 09:43:40.599072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0)
00:17:46.197 [2024-07-15 09:43:40.599117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:46.197 [2024-07-15 09:43:40.599137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:46.197 [2024-07-15 09:43:40.603425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0)
00:17:46.197 [2024-07-15 09:43:40.603472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:46.197 [2024-07-15 09:43:40.603493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.197 [2024-07-15 09:43:40.607843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.197 [2024-07-15 09:43:40.607889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.197 [2024-07-15 09:43:40.607955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.197 [2024-07-15 09:43:40.612210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.197 [2024-07-15 09:43:40.612254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.197 [2024-07-15 09:43:40.612276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.197 [2024-07-15 09:43:40.616581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.197 [2024-07-15 09:43:40.616624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.197 [2024-07-15 09:43:40.616644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.197 [2024-07-15 09:43:40.620879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.197 [2024-07-15 09:43:40.620934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.198 [2024-07-15 09:43:40.620954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.198 [2024-07-15 09:43:40.625326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.198 [2024-07-15 09:43:40.625385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.198 [2024-07-15 09:43:40.625405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.198 [2024-07-15 09:43:40.629753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.198 [2024-07-15 09:43:40.629799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.198 [2024-07-15 09:43:40.629820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.198 [2024-07-15 09:43:40.634068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.198 [2024-07-15 09:43:40.634128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.198 [2024-07-15 09:43:40.634149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.198 [2024-07-15 09:43:40.638360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.198 [2024-07-15 09:43:40.638408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.198 [2024-07-15 09:43:40.638429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.198 [2024-07-15 09:43:40.642840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.198 [2024-07-15 09:43:40.642885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.198 [2024-07-15 09:43:40.642943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.198 [2024-07-15 09:43:40.647264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.198 [2024-07-15 09:43:40.647310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.198 [2024-07-15 09:43:40.647330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.198 [2024-07-15 09:43:40.651691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.198 [2024-07-15 09:43:40.651737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.198 [2024-07-15 09:43:40.651757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.198 [2024-07-15 09:43:40.656155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.198 [2024-07-15 09:43:40.656200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.198 [2024-07-15 09:43:40.656221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.198 [2024-07-15 09:43:40.660669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.198 [2024-07-15 09:43:40.660715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.198 [2024-07-15 09:43:40.660737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.458 [2024-07-15 09:43:40.665058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.458 [2024-07-15 09:43:40.665104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:46.458 [2024-07-15 09:43:40.665125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.458 [2024-07-15 09:43:40.669457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.669501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.669521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.673781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.673825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.673848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.677967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.678009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.678028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.682195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.682239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.682260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.686536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.686580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.686600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.691005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.691050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.691071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.695277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.695322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.695343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.699659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.699703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.699723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.704062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.704104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.704157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.708657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.708701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.708721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.713069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.713115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.713136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.717485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.717530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.717549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.721829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.721876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.721927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.726214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.726258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.726277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.730611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.730672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.730692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.735085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.735128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.735149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.739573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.739619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.739640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.744031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.744091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.744112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.748367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.748413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.748433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.752775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.752822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.752842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.757182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 
00:17:46.459 [2024-07-15 09:43:40.757226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.757247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.761660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.761704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.761724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.766019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.766063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.766084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.770354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.770399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.770434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.774853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.774948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.774970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.779428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.779474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.779495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.783876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.459 [2024-07-15 09:43:40.783931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.459 [2024-07-15 09:43:40.783953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.459 [2024-07-15 09:43:40.788299] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.788345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.788366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.792716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.792759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.792779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.797151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.797198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.797221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.801615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.801661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.801682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.805907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.805957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.805979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.810302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.810347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.810368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.814743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.814789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.814824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.819330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.819376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.819396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.823733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.823779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.823800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.827987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.828031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.828051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.832259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.832304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.832324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.836800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.836847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.836868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.841214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.841271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.841291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.845632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.845679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.845699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.849986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.850030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.850050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.854286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.854332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.854352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.858716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.858761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.858783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.863144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.863189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.863211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.867561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.867607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.867628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.871919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.871964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.871985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.876178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.876223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.876245] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.880430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.880477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.880498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.884857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.884923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.884947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.889189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.889234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.889255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.893536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.893582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.893602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.898039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.898085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.460 [2024-07-15 09:43:40.898106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.460 [2024-07-15 09:43:40.902278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.460 [2024-07-15 09:43:40.902324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.461 [2024-07-15 09:43:40.902345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.461 [2024-07-15 09:43:40.906624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.461 [2024-07-15 09:43:40.906671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:46.461 [2024-07-15 09:43:40.906693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.461 [2024-07-15 09:43:40.910953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.461 [2024-07-15 09:43:40.910997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.461 [2024-07-15 09:43:40.911018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.461 [2024-07-15 09:43:40.915351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.461 [2024-07-15 09:43:40.915398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.461 [2024-07-15 09:43:40.915419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.461 [2024-07-15 09:43:40.919707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.461 [2024-07-15 09:43:40.919753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.461 [2024-07-15 09:43:40.919774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.924046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.924098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.924119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.928269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.928315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.928337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.932667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.932714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.932736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.937131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.937176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.937196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.941406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.941452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.941473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.946401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.946447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.946487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.950845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.950922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.950946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.955096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.955141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.955162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.959485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.959538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.959559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.963876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.963933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.963955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.968271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.968316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.968338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.972573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.972620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.972642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.976851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.976914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.976937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.981167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.981211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.981232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.985513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.720 [2024-07-15 09:43:40.985558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.720 [2024-07-15 09:43:40.985579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.720 [2024-07-15 09:43:40.989724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.721 [2024-07-15 09:43:40.989771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.721 [2024-07-15 09:43:40.989791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.721 [2024-07-15 09:43:40.993906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.721 [2024-07-15 09:43:40.993948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.721 [2024-07-15 09:43:40.993969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.721 [2024-07-15 09:43:40.998226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 
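(The repeated "data digest error" entries above come from the NVMe/TCP data-digest (DDGST) check on received data PDUs failing, after which each READ is completed with a transient transport error. As an aside, the digest in question is a CRC32C over the PDU data; the sketch below is only an illustration of that check, assuming a bit-wise CRC32C and hypothetical names — it is not the SPDK code path shown in the log, which computes the CRC through the accel framework in nvme_tcp_accel_seq_recv_compute_crc32_done.)

/* Illustrative sketch only: CRC32C data-digest check of the kind the
 * "data digest error" messages above refer to. Names are hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bit-wise CRC32C, reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Compare the digest computed over the received PDU data with the DDGST
 * carried in the PDU; a mismatch is what gets logged as a data digest
 * error and completed as a transient transport error. */
static int ddgst_ok(const void *pdu_data, size_t len, uint32_t recv_ddgst)
{
    return crc32c(pdu_data, len) == recv_ddgst;
}

int main(void)
{
    uint8_t data[32];
    memset(data, 0xA5, sizeof(data));

    uint32_t good = crc32c(data, sizeof(data));
    printf("digest 0x%08x match=%d\n", good, ddgst_ok(data, sizeof(data), good));
    data[0] ^= 0xFF;            /* simulate corruption of the data in flight */
    printf("after corruption match=%d\n", ddgst_ok(data, sizeof(data), good));
    return 0;
}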
00:17:46.721 [2024-07-15 09:43:40.998271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.721 [2024-07-15 09:43:40.998291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.721 [2024-07-15 09:43:41.002672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.721 [2024-07-15 09:43:41.002719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.721 [2024-07-15 09:43:41.002740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.721 [2024-07-15 09:43:41.007020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.721 [2024-07-15 09:43:41.007064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.721 [2024-07-15 09:43:41.007086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.721 [2024-07-15 09:43:41.011305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.721 [2024-07-15 09:43:41.011352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.721 [2024-07-15 09:43:41.011373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.721 [2024-07-15 09:43:41.015668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.721 [2024-07-15 09:43:41.015714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.721 [2024-07-15 09:43:41.015735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.721 [2024-07-15 09:43:41.020070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.721 [2024-07-15 09:43:41.020115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.721 [2024-07-15 09:43:41.020136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.721 [2024-07-15 09:43:41.024495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0) 00:17:46.721 [2024-07-15 09:43:41.024542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.721 [2024-07-15 09:43:41.024563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.721 [2024-07-15 09:43:41.028749] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0)
[... the same three-record pattern repeats for every completed READ of this 2-second randread run (qid:1 cid:15 nsid:1, len:32, lba varying): a data digest error from nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done, the command echoed by nvme_qpair.c: 243:nvme_io_qpair_print_command, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c: 474:spdk_nvme_print_completion; the records run from 09:43:41.028 through 09:43:41.582 (elapsed 00:17:46.721 - 00:17:47.243), and only the final occurrence is kept below ...]
00:17:47.243 [2024-07-15 09:43:41.582419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2228ac0)
00:17:47.243 [2024-07-15 09:43:41.582457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:47.243 [2024-07-15 09:43:41.582486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:47.243
00:17:47.243 Latency(us)
00:17:47.243 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:47.243 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:17:47.243 	 nvme0n1 :       2.00    6977.72     872.21       0.00       0.00    2289.49    1906.50    9651.67
00:17:47.243 ===================================================================================================================
00:17:47.243 Total :                 6977.72     872.21       0.00       0.00    2289.49    1906.50    9651.67
00:17:47.243  0
00:17:47.243 09:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:47.243 09:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:47.243 09:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:47.244 09:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:47.244 | .driver_specific
00:17:47.244 | .nvme_error
00:17:47.244 | .status_code
00:17:47.244 | .command_transient_transport_error'
00:17:47.502 09:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 450 > 0 ))
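For reference, the transient-error check above reduces to a single RPC round trip: bdevperf's bdev_get_iostat output carries the NVMe error counters, and the test only asserts that the transient-transport-error count is non-zero. A minimal manual equivalent, assuming the bdevperf RPC socket /var/tmp/bperf.sock from this run is still listening and nvme0n1 is the attached bdev (paths exactly as captured in the trace):

  # query per-bdev I/O statistics from bdevperf and pull out the transient transport error counter
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # the test only requires the printed value to be > 0; this run reported 450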
--no-headers -o comm= 80796 00:17:47.502 killing process with pid 80796 00:17:47.502 Received shutdown signal, test time was about 2.000000 seconds 00:17:47.502 00:17:47.502 Latency(us) 00:17:47.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.502 =================================================================================================================== 00:17:47.502 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.502 09:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:47.502 09:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:47.502 09:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80796' 00:17:47.502 09:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80796 00:17:47.502 09:43:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80796 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80851 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80851 /var/tmp/bperf.sock 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80851 ']' 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.760 09:43:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:47.760 [2024-07-15 09:43:42.222714] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
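For reference, a minimal sketch of how the transient-transport-error count checked above is read back, assuming the same bperf RPC socket used in the trace (/var/tmp/bperf.sock) and the rpc.py/jq invocation that digest.sh's get_transient_errcount helper logs; the errcount variable name is illustrative only:

    # Query per-bdev I/O statistics from the bdevperf instance over its private RPC socket,
    # then pull out the NVMe "command transient transport error" counter from driver_specific.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The randread stage above passes only if at least one such error was counted
    # (the trace shows the check expanding to "(( 450 > 0 ))").
    (( errcount > 0 )) && echo "observed ${errcount} transient transport errors"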
00:17:47.760 [2024-07-15 09:43:42.223225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80851 ] 00:17:48.018 [2024-07-15 09:43:42.361847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.018 [2024-07-15 09:43:42.479985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.275 [2024-07-15 09:43:42.533699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:48.854 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.854 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:48.854 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:48.854 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:49.111 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:49.111 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.111 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:49.111 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.111 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:49.111 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:49.367 nvme0n1 00:17:49.367 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:49.367 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.367 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:49.367 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.367 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:49.367 09:43:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:49.625 Running I/O for 2 seconds... 
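Before the 2-second randwrite run above starts, digest.sh wires up data-digest error injection. A condensed sketch of that setup, using only the commands, flags, addresses, and NQN visible in the trace (the semantics in the comments follow the logged behaviour; anything beyond that is an assumption):

    # Start bdevperf against a private RPC socket; -z means no bdevs yet, the test waits
    # for it to listen on /var/tmp/bperf.sock before issuing RPCs (waitforlisten in the trace).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    # Collect per-command NVMe error statistics and keep retrying failed commands
    # (options exactly as logged).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # On the target side (rpc_cmd, i.e. the default RPC socket, not bperf.sock):
    # first clear any previous crc32c injection, as the trace does.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # Attach the target with data digest enabled (--ddgst) so received payloads are
    # CRC32C-verified on the host side.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Inject corruption into crc32c accel operations on the target (flags exactly as logged),
    # so data digest verification fails and commands complete with transient transport errors,
    # which is what the flood of "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR"
    # records below reflects.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # Kick off the timed I/O run through bdevperf's RPC helper.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests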
00:17:49.625 [2024-07-15 09:43:43.908554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fef90 00:17:49.625 [2024-07-15 09:43:43.911311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.625 [2024-07-15 09:43:43.911508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:49.625 [2024-07-15 09:43:43.925084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190feb58 00:17:49.625 [2024-07-15 09:43:43.927731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.625 [2024-07-15 09:43:43.927942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:49.625 [2024-07-15 09:43:43.941381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fe2e8 00:17:49.625 [2024-07-15 09:43:43.944021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.625 [2024-07-15 09:43:43.944060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:49.625 [2024-07-15 09:43:43.957441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fda78 00:17:49.625 [2024-07-15 09:43:43.959873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.625 [2024-07-15 09:43:43.959920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:49.625 [2024-07-15 09:43:43.973223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fd208 00:17:49.625 [2024-07-15 09:43:43.975655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.625 [2024-07-15 09:43:43.975692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:49.625 [2024-07-15 09:43:43.989186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fc998 00:17:49.625 [2024-07-15 09:43:43.991588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.625 [2024-07-15 09:43:43.991624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:49.625 [2024-07-15 09:43:44.005144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fc128 00:17:49.625 [2024-07-15 09:43:44.007538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.625 [2024-07-15 09:43:44.007574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:17:49.625 [2024-07-15 09:43:44.021251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fb8b8 00:17:49.626 [2024-07-15 09:43:44.023669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.626 [2024-07-15 09:43:44.023707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:49.626 [2024-07-15 09:43:44.037449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fb048 00:17:49.626 [2024-07-15 09:43:44.039823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.626 [2024-07-15 09:43:44.039873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:49.626 [2024-07-15 09:43:44.053370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fa7d8 00:17:49.626 [2024-07-15 09:43:44.055736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.626 [2024-07-15 09:43:44.055772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:49.626 [2024-07-15 09:43:44.069240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f9f68 00:17:49.626 [2024-07-15 09:43:44.071537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.626 [2024-07-15 09:43:44.071573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:49.626 [2024-07-15 09:43:44.085069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f96f8 00:17:49.626 [2024-07-15 09:43:44.087360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.626 [2024-07-15 09:43:44.087395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.100752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f8e88 00:17:49.884 [2024-07-15 09:43:44.103079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.103114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.116638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f8618 00:17:49.884 [2024-07-15 09:43:44.118912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.118946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.132697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f7da8 00:17:49.884 [2024-07-15 09:43:44.135033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.135070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.148824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f7538 00:17:49.884 [2024-07-15 09:43:44.151048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.151085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.164703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f6cc8 00:17:49.884 [2024-07-15 09:43:44.166920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.166954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.180385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f6458 00:17:49.884 [2024-07-15 09:43:44.182577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.182612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.196108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f5be8 00:17:49.884 [2024-07-15 09:43:44.198255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.198292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.212091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f5378 00:17:49.884 [2024-07-15 09:43:44.214242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.214279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.228165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f4b08 00:17:49.884 [2024-07-15 09:43:44.230296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.230334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.243968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f4298 00:17:49.884 [2024-07-15 09:43:44.246099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.246136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.259817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f3a28 00:17:49.884 [2024-07-15 09:43:44.261938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.261977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.275877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f31b8 00:17:49.884 [2024-07-15 09:43:44.277961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.278017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.291859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f2948 00:17:49.884 [2024-07-15 09:43:44.293957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.294009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.307783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f20d8 00:17:49.884 [2024-07-15 09:43:44.309821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.309857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.323731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f1868 00:17:49.884 [2024-07-15 09:43:44.325780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.325816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:49.884 [2024-07-15 09:43:44.339677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f0ff8 00:17:49.884 [2024-07-15 09:43:44.341704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:49.884 [2024-07-15 09:43:44.341743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.355588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f0788 00:17:50.143 [2024-07-15 09:43:44.357556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.357593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.371495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190eff18 00:17:50.143 [2024-07-15 09:43:44.373466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.373504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.387647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ef6a8 00:17:50.143 [2024-07-15 09:43:44.389654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.389696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.403921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190eee38 00:17:50.143 [2024-07-15 09:43:44.405864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.405915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.420034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ee5c8 00:17:50.143 [2024-07-15 09:43:44.421934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.421970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.436083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190edd58 00:17:50.143 [2024-07-15 09:43:44.437961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.437997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.452304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ed4e8 00:17:50.143 [2024-07-15 09:43:44.454200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.454237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.468398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ecc78 00:17:50.143 [2024-07-15 09:43:44.470282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.470319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.484593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ec408 00:17:50.143 [2024-07-15 09:43:44.486525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.486558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.500569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ebb98 00:17:50.143 [2024-07-15 09:43:44.502388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.502421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.516556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190eb328 00:17:50.143 [2024-07-15 09:43:44.518391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.518426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.532763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190eaab8 00:17:50.143 [2024-07-15 09:43:44.534627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.534669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.548873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ea248 00:17:50.143 [2024-07-15 09:43:44.550633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.550669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.565053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e99d8 00:17:50.143 [2024-07-15 09:43:44.566786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.566821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.581097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e9168 00:17:50.143 [2024-07-15 09:43:44.582870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.582943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:50.143 [2024-07-15 09:43:44.597001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e88f8 00:17:50.143 [2024-07-15 09:43:44.598795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.143 [2024-07-15 09:43:44.598829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.612762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e8088 00:17:50.402 [2024-07-15 09:43:44.614522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:8160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.614555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.628961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e7818 00:17:50.402 [2024-07-15 09:43:44.630751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.630784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.645163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e6fa8 00:17:50.402 [2024-07-15 09:43:44.646816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.646849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.660698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e6738 00:17:50.402 [2024-07-15 09:43:44.662404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.662438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.676914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e5ec8 00:17:50.402 [2024-07-15 09:43:44.678630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.678664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.693179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e5658 00:17:50.402 [2024-07-15 09:43:44.694776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.694809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.708624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e4de8 00:17:50.402 [2024-07-15 09:43:44.710251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.710285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.724727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e4578 00:17:50.402 [2024-07-15 09:43:44.726313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.726350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.740852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e3d08 00:17:50.402 [2024-07-15 09:43:44.742459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.742526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.756835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e3498 00:17:50.402 [2024-07-15 09:43:44.758426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.758464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.772825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e2c28 00:17:50.402 [2024-07-15 09:43:44.774328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.774368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.788969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e23b8 00:17:50.402 [2024-07-15 09:43:44.790526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 
09:43:44.790580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.805247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e1b48 00:17:50.402 [2024-07-15 09:43:44.806705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.806741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.821518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e12d8 00:17:50.402 [2024-07-15 09:43:44.822955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.822996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.837752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e0a68 00:17:50.402 [2024-07-15 09:43:44.839168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.839206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:50.402 [2024-07-15 09:43:44.854044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e01f8 00:17:50.402 [2024-07-15 09:43:44.855396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.402 [2024-07-15 09:43:44.855431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:44.870036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190df988 00:17:50.661 [2024-07-15 09:43:44.871406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:44.871442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:44.886019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190df118 00:17:50.661 [2024-07-15 09:43:44.887336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:44.887371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:44.901866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190de8a8 00:17:50.661 [2024-07-15 09:43:44.903175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:50.661 [2024-07-15 09:43:44.903220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:44.917714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190de038 00:17:50.661 [2024-07-15 09:43:44.919011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:44.919048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:44.940380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190de038 00:17:50.661 [2024-07-15 09:43:44.942903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:44.942942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:44.956363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190de8a8 00:17:50.661 [2024-07-15 09:43:44.958882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:44.958928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:44.972347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190df118 00:17:50.661 [2024-07-15 09:43:44.974828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:44.974867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:44.988382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190df988 00:17:50.661 [2024-07-15 09:43:44.990869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:44.990917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:45.004301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e01f8 00:17:50.661 [2024-07-15 09:43:45.006726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:45.006761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:45.020322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e0a68 00:17:50.661 [2024-07-15 09:43:45.022732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3856 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:45.022771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:45.036574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e12d8 00:17:50.661 [2024-07-15 09:43:45.039062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:45.039103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:45.052710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e1b48 00:17:50.661 [2024-07-15 09:43:45.055098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:45.055136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:45.069128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e23b8 00:17:50.661 [2024-07-15 09:43:45.071556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:45.071607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:45.085816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e2c28 00:17:50.661 [2024-07-15 09:43:45.088258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:45.088312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:45.102602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e3498 00:17:50.661 [2024-07-15 09:43:45.105001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:45.105056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:50.661 [2024-07-15 09:43:45.119101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e3d08 00:17:50.661 [2024-07-15 09:43:45.121461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.661 [2024-07-15 09:43:45.121505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:50.922 [2024-07-15 09:43:45.135586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e4578 00:17:50.922 [2024-07-15 09:43:45.137933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.922 [2024-07-15 09:43:45.137978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:50.922 [2024-07-15 09:43:45.152116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e4de8 00:17:50.922 [2024-07-15 09:43:45.154456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.922 [2024-07-15 09:43:45.154501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:50.922 [2024-07-15 09:43:45.168677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e5658 00:17:50.922 [2024-07-15 09:43:45.171007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.922 [2024-07-15 09:43:45.171052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:50.922 [2024-07-15 09:43:45.185346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e5ec8 00:17:50.922 [2024-07-15 09:43:45.187744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.922 [2024-07-15 09:43:45.187782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:50.922 [2024-07-15 09:43:45.201996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e6738 00:17:50.922 [2024-07-15 09:43:45.204267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.922 [2024-07-15 09:43:45.204313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:50.922 [2024-07-15 09:43:45.218650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e6fa8 00:17:50.922 [2024-07-15 09:43:45.220902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.922 [2024-07-15 09:43:45.220968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:50.922 [2024-07-15 09:43:45.235191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e7818 00:17:50.922 [2024-07-15 09:43:45.237444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.922 [2024-07-15 09:43:45.237488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:50.922 [2024-07-15 09:43:45.251616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e8088 00:17:50.922 [2024-07-15 09:43:45.253806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12890 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.922 [2024-07-15 09:43:45.253851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:50.922 [2024-07-15 09:43:45.268194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e88f8 00:17:50.922 [2024-07-15 09:43:45.270362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.922 [2024-07-15 09:43:45.270408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:50.922 [2024-07-15 09:43:45.284616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e9168 00:17:50.922 [2024-07-15 09:43:45.286812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.923 [2024-07-15 09:43:45.286857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:50.923 [2024-07-15 09:43:45.300682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190e99d8 00:17:50.923 [2024-07-15 09:43:45.302789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.923 [2024-07-15 09:43:45.302825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:50.923 [2024-07-15 09:43:45.317046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ea248 00:17:50.923 [2024-07-15 09:43:45.319158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.923 [2024-07-15 09:43:45.319201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:50.923 [2024-07-15 09:43:45.333160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190eaab8 00:17:50.923 [2024-07-15 09:43:45.335213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.923 [2024-07-15 09:43:45.335258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:50.923 [2024-07-15 09:43:45.349238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190eb328 00:17:50.923 [2024-07-15 09:43:45.351321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.923 [2024-07-15 09:43:45.351362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:50.923 [2024-07-15 09:43:45.365395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ebb98 00:17:50.923 [2024-07-15 09:43:45.367429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 
nsid:1 lba:4630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.923 [2024-07-15 09:43:45.367472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:50.923 [2024-07-15 09:43:45.381756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ec408 00:17:50.923 [2024-07-15 09:43:45.383889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.923 [2024-07-15 09:43:45.383934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.398176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ecc78 00:17:51.181 [2024-07-15 09:43:45.400223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.400259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.414642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ed4e8 00:17:51.181 [2024-07-15 09:43:45.416632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.416670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.431091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190edd58 00:17:51.181 [2024-07-15 09:43:45.433071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.433108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.447512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ee5c8 00:17:51.181 [2024-07-15 09:43:45.449477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.449520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.463886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190eee38 00:17:51.181 [2024-07-15 09:43:45.465797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.465836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.479806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190ef6a8 00:17:51.181 [2024-07-15 09:43:45.481669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:9 nsid:1 lba:1773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.481702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.495803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190eff18 00:17:51.181 [2024-07-15 09:43:45.497643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.497675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.511799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f0788 00:17:51.181 [2024-07-15 09:43:45.513637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.513670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.527852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f0ff8 00:17:51.181 [2024-07-15 09:43:45.529689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.529722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.544097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f1868 00:17:51.181 [2024-07-15 09:43:45.545919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.545963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.560048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f20d8 00:17:51.181 [2024-07-15 09:43:45.561802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.561834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.576060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f2948 00:17:51.181 [2024-07-15 09:43:45.577835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.577876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.592270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f31b8 00:17:51.181 [2024-07-15 09:43:45.594057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:37 nsid:1 lba:2229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.594093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.608626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f3a28 00:17:51.181 [2024-07-15 09:43:45.610338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.610371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.625018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f4298 00:17:51.181 [2024-07-15 09:43:45.626713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.626745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:51.181 [2024-07-15 09:43:45.640887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f4b08 00:17:51.181 [2024-07-15 09:43:45.642567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.181 [2024-07-15 09:43:45.642599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:51.439 [2024-07-15 09:43:45.656991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f5378 00:17:51.439 [2024-07-15 09:43:45.658688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.439 [2024-07-15 09:43:45.658725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:51.439 [2024-07-15 09:43:45.672915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f5be8 00:17:51.439 [2024-07-15 09:43:45.674531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.439 [2024-07-15 09:43:45.674562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:51.439 [2024-07-15 09:43:45.688797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f6458 00:17:51.439 [2024-07-15 09:43:45.690405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.439 [2024-07-15 09:43:45.690435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:51.439 [2024-07-15 09:43:45.704682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f6cc8 00:17:51.439 [2024-07-15 09:43:45.706312] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.439 [2024-07-15 09:43:45.706347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:51.439 [2024-07-15 09:43:45.720766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f7538 00:17:51.439 [2024-07-15 09:43:45.722403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.439 [2024-07-15 09:43:45.722442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:51.440 [2024-07-15 09:43:45.737002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f7da8 00:17:51.440 [2024-07-15 09:43:45.738633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.440 [2024-07-15 09:43:45.738672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:51.440 [2024-07-15 09:43:45.753601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f8618 00:17:51.440 [2024-07-15 09:43:45.755202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.440 [2024-07-15 09:43:45.755238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:51.440 [2024-07-15 09:43:45.769672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f8e88 00:17:51.440 [2024-07-15 09:43:45.771235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.440 [2024-07-15 09:43:45.771275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:51.440 [2024-07-15 09:43:45.785702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f96f8 00:17:51.440 [2024-07-15 09:43:45.787270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.440 [2024-07-15 09:43:45.787305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:51.440 [2024-07-15 09:43:45.801943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190f9f68 00:17:51.440 [2024-07-15 09:43:45.803432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.440 [2024-07-15 09:43:45.803468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:51.440 [2024-07-15 09:43:45.818063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fa7d8 00:17:51.440 [2024-07-15 09:43:45.819536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.440 [2024-07-15 09:43:45.819573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:51.440 [2024-07-15 09:43:45.834101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fb048 00:17:51.440 [2024-07-15 09:43:45.835526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.440 [2024-07-15 09:43:45.835560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:51.440 [2024-07-15 09:43:45.850545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fb8b8 00:17:51.440 [2024-07-15 09:43:45.852048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.440 [2024-07-15 09:43:45.852084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:51.440 [2024-07-15 09:43:45.867182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fc128 00:17:51.440 [2024-07-15 09:43:45.868651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.440 [2024-07-15 09:43:45.868696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:51.440 [2024-07-15 09:43:45.883482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16360) with pdu=0x2000190fc998 00:17:51.440 [2024-07-15 09:43:45.884854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.440 [2024-07-15 09:43:45.884898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:51.440 00:17:51.440 Latency(us) 00:17:51.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.440 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.440 nvme0n1 : 2.00 15670.37 61.21 0.00 0.00 8161.26 5928.03 31695.59 00:17:51.440 =================================================================================================================== 00:17:51.440 Total : 15670.37 61.21 0.00 0.00 8161.26 5928.03 31695.59 00:17:51.440 0 00:17:51.440 09:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:51.711 09:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:51.711 09:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:51.711 | .driver_specific 00:17:51.711 | .nvme_error 00:17:51.711 | .status_code 00:17:51.711 | .command_transient_transport_error' 00:17:51.711 09:43:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:51.970 09:43:46 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 )) 00:17:51.970 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80851 00:17:51.970 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80851 ']' 00:17:51.970 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80851 00:17:51.970 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:51.970 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.970 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80851 00:17:51.970 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:51.970 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:51.970 killing process with pid 80851 00:17:51.970 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80851' 00:17:51.970 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80851 00:17:51.970 Received shutdown signal, test time was about 2.000000 seconds 00:17:51.970 00:17:51.970 Latency(us) 00:17:51.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.970 =================================================================================================================== 00:17:51.970 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:51.970 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80851 00:17:52.229 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:52.229 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:52.229 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:52.229 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:52.229 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:52.229 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80913 00:17:52.229 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:52.229 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80913 /var/tmp/bperf.sock 00:17:52.229 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80913 ']' 00:17:52.229 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:52.229 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:52.230 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
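For reference, the transient-error readout that get_transient_errcount traces above boils down to a single RPC piped through a jq filter; a minimal standalone sketch, assuming the same /var/tmp/bperf.sock control socket and nvme0n1 bdev name used in this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The (( 123 > 0 )) check at the start of this block is that count being compared against zero, confirming the injected digest corruption actually produced transient transport errors before the first bdevperf instance is killed and the next test case is started.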
00:17:52.230 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.230 09:43:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:52.230 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:52.230 Zero copy mechanism will not be used. 00:17:52.230 [2024-07-15 09:43:46.519207] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:17:52.230 [2024-07-15 09:43:46.519312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80913 ] 00:17:52.230 [2024-07-15 09:43:46.659620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.488 [2024-07-15 09:43:46.784749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.488 [2024-07-15 09:43:46.839846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:53.076 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.076 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:17:53.076 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:53.076 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:53.334 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:53.334 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.334 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:53.334 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.334 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:53.334 09:43:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:53.903 nvme0n1 00:17:53.903 09:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:53.903 09:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.903 09:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:53.903 09:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.903 09:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:53.903 09:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:53.903 I/O size of 131072 is greater than zero copy threshold (65536). 
00:17:53.903 Zero copy mechanism will not be used. 00:17:53.903 Running I/O for 2 seconds... 00:17:53.903 [2024-07-15 09:43:48.222190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.903 [2024-07-15 09:43:48.222520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.903 [2024-07-15 09:43:48.222550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.903 [2024-07-15 09:43:48.227436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.903 [2024-07-15 09:43:48.227754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.903 [2024-07-15 09:43:48.227785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.903 [2024-07-15 09:43:48.232697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.903 [2024-07-15 09:43:48.233049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.233073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.237816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.238147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.238176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.242951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.243242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.243279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.248024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.248336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.248364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.253204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.253527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.253555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.258282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.258603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.258631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.263406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.263723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.263750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.268515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.268832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.268859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.273755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.274108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.274135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.278948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.279283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.279311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.284118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.284434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.284462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.289272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.289584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 
[2024-07-15 09:43:48.289612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.294348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.294639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.294666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.299513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.299835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.299863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.304775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.305120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.305149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.310025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.310349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.310376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.315117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.315447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.315475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.320185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.320493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.320520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.325278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.325570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.325598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.330287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.330589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.330616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.335362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.335669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.335697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.340441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.340763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.340792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.345590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.345901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.345941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.350634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.350968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.350996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.355839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.356155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.356183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.361041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.361333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.361361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.904 [2024-07-15 09:43:48.366131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:53.904 [2024-07-15 09:43:48.366431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.904 [2024-07-15 09:43:48.366458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.180 [2024-07-15 09:43:48.371237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.180 [2024-07-15 09:43:48.371527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.180 [2024-07-15 09:43:48.371555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.180 [2024-07-15 09:43:48.376344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.180 [2024-07-15 09:43:48.376643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.180 [2024-07-15 09:43:48.376671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.180 [2024-07-15 09:43:48.381506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.180 [2024-07-15 09:43:48.381806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.180 [2024-07-15 09:43:48.381833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.180 [2024-07-15 09:43:48.386733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.180 [2024-07-15 09:43:48.387067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.180 [2024-07-15 09:43:48.387094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.180 [2024-07-15 09:43:48.391854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.180 [2024-07-15 09:43:48.392200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.180 [2024-07-15 09:43:48.392228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.180 [2024-07-15 09:43:48.397075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.180 [2024-07-15 09:43:48.397395] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.180 [2024-07-15 09:43:48.397423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.180 [2024-07-15 09:43:48.402309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.180 [2024-07-15 09:43:48.402615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.180 [2024-07-15 09:43:48.402644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.180 [2024-07-15 09:43:48.407396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.180 [2024-07-15 09:43:48.407707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.180 [2024-07-15 09:43:48.407735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.180 [2024-07-15 09:43:48.412491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.180 [2024-07-15 09:43:48.412797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.180 [2024-07-15 09:43:48.412824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.417616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.417940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.417967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.422639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.422960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.422995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.427670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.427998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.428026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.432728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.433062] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.433090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.437828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.438141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.438177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.442837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.443160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.443188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.447775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.448109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.448137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.452800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.453142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.453169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.457907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.458200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.458227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.463047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.463354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.463380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.468097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 
00:17:54.181 [2024-07-15 09:43:48.468401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.468428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.473205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.473515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.473546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.478343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.478639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.478669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.483491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.483787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.483816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.488631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.488939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.488966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.493643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.493951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.493978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.498698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.499001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.499029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.503785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.504089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.504117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.508796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.509120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.509148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.513828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.514133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.514161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.518849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.519155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.519183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.523950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.524253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.524280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.528961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.529261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.529289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.534010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.534314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.534341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.539114] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.539413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.539440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.544136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.544428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.181 [2024-07-15 09:43:48.544456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.181 [2024-07-15 09:43:48.549210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.181 [2024-07-15 09:43:48.549502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.549529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.554223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.554518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.554546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.559255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.559548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.559575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.564321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.564624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.564652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.569429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.569722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.569754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
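Each repeated pair in this stream (a data_crc32_calc_done digest error followed by a WRITE completion with COMMAND TRANSIENT TRANSPORT ERROR 00/22) is the intended outcome of the setup traced before this run: the controller is attached with data digest enabled and crc32c error injection is then turned on, so corrupted digests on the TCP transport surface as transient transport errors that are counted (and, with --bdev-retry-count -1, retried) rather than failing the I/O. A minimal sketch of those two calls, copied from the xtrace earlier in this test (the second goes through the test suite's rpc_cmd wrapper; socket paths and names are exactly those shown in the trace):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32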
00:17:54.182 [2024-07-15 09:43:48.574457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.574749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.574784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.579485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.579779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.579814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.584505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.584795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.584823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.589564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.589855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.589883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.594621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.594939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.594966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.599599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.599889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.599928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.604609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.604917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.604944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.609650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.609961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.609988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.614710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.615043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.615072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.619828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.620162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.620190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.624867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.625199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.625227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.182 [2024-07-15 09:43:48.629988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.182 [2024-07-15 09:43:48.630286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.182 [2024-07-15 09:43:48.630314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.465 [2024-07-15 09:43:48.634986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.465 [2024-07-15 09:43:48.635284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.465 [2024-07-15 09:43:48.635312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.465 [2024-07-15 09:43:48.639992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.465 [2024-07-15 09:43:48.640288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.465 [2024-07-15 09:43:48.640315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.465 [2024-07-15 09:43:48.645045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.465 [2024-07-15 09:43:48.645346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.465 [2024-07-15 09:43:48.645373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.465 [2024-07-15 09:43:48.650163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.465 [2024-07-15 09:43:48.650453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.465 [2024-07-15 09:43:48.650480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.655247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.655538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.655566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.660331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.660621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.660648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.665388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.665681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.665709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.670473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.670768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.670798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.675625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.675951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.675979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.680826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.681137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.681162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.685992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.686292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.686322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.691145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.691445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.691474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.696220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.696514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.696543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.701217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.701510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.701537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.706239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.706529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.706559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.711402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.711696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 
09:43:48.711725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.716430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.716723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.716750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.721482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.721778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.721807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.726558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.726849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.726872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.731615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.731937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.731960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.736639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.736949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.736976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.741663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.741972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.741999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.746677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.746996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.747024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.751702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.752032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.752060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.756740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.757082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.757110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.761890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.762223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.762251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.766925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.767247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.767275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.772068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.772361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.772388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.777164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.777471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.777499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.782178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.782469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.782496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.787226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.466 [2024-07-15 09:43:48.787521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.466 [2024-07-15 09:43:48.787549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.466 [2024-07-15 09:43:48.792339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.792633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.792661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.797390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.797687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.797716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.802411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.802721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.802751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.807528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.807841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.807870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.812688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.813018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.813058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.817789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.818116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.818144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.822907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.823212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.823240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.827923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.828233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.828260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.833002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.833316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.833353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.838014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.838324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.838351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.843127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.843417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.843445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.848198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.848492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.848519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.853246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.853542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.853569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.858270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.858572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.858599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.863380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.863680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.863707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.868510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.868809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.868837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.873729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.874067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.874095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.878789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.879125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.879152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.883770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.884102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.884130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.888754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 
[2024-07-15 09:43:48.889094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.889121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.893888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.894220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.894248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.898979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.899302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.899328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.904075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.904389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.904416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.909173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.909490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.909521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.914313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.914629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.914656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.919436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.919735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.919762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.924516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) 
with pdu=0x2000190fef90 00:17:54.467 [2024-07-15 09:43:48.924815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.467 [2024-07-15 09:43:48.924843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.467 [2024-07-15 09:43:48.929686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.468 [2024-07-15 09:43:48.930015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.468 [2024-07-15 09:43:48.930043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.728 [2024-07-15 09:43:48.934812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.728 [2024-07-15 09:43:48.935148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.728 [2024-07-15 09:43:48.935176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.728 [2024-07-15 09:43:48.939856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.728 [2024-07-15 09:43:48.940191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.728 [2024-07-15 09:43:48.940218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.728 [2024-07-15 09:43:48.945013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.728 [2024-07-15 09:43:48.945305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.728 [2024-07-15 09:43:48.945332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.728 [2024-07-15 09:43:48.950071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.728 [2024-07-15 09:43:48.950379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.728 [2024-07-15 09:43:48.950406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.728 [2024-07-15 09:43:48.955038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.728 [2024-07-15 09:43:48.955346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.728 [2024-07-15 09:43:48.955373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.728 [2024-07-15 09:43:48.960045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.728 [2024-07-15 09:43:48.960353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.728 [2024-07-15 09:43:48.960380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.728 [2024-07-15 09:43:48.965159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.728 [2024-07-15 09:43:48.965467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.728 [2024-07-15 09:43:48.965494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.728 [2024-07-15 09:43:48.970229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.728 [2024-07-15 09:43:48.970530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.728 [2024-07-15 09:43:48.970557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.728 [2024-07-15 09:43:48.975262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.728 [2024-07-15 09:43:48.975568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.728 [2024-07-15 09:43:48.975596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.728 [2024-07-15 09:43:48.980385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.728 [2024-07-15 09:43:48.980685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:48.980707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:48.985521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:48.985851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:48.985879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:48.990484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:48.990780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:48.990808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:48.995528] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:48.995846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:48.995875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.000546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.000859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.000886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.005650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.005980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.006007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.010639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.010961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.010984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.015688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.016007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.016035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.020682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.021001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.021039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.025782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.026110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.026137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:54.729 [2024-07-15 09:43:49.030773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.031125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.031153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.035737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.036079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.036107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.040674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.040996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.041049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.045636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.045927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.045962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.050653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.050992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.051019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.055720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.056066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.056094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.060768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.061133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.061161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.065896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.066239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.066266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.070944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.071289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.071316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.076145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.076456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.076481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.081296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.081607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.081635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.086548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.086837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.086880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.091686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.092011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.092038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.096860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.097240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.097268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.102008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.102324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.102350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.107098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.107425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.107453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.112080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.112400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.729 [2024-07-15 09:43:49.112425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.729 [2024-07-15 09:43:49.117163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.729 [2024-07-15 09:43:49.117485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.117512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.122258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.122571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.122598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.127399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.127697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.127725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.132449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.132744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.132766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.137554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.137864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.137906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.142739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.143063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.143091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.147841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.148171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.148199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.153171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.153480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.153513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.158409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.158699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.158727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.163392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.163683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.163710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.168468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.168762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 
09:43:49.168789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.173622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.173928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.173986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.178821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.179171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.179198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.184006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.184297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.184324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.730 [2024-07-15 09:43:49.189099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.730 [2024-07-15 09:43:49.189407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.730 [2024-07-15 09:43:49.189433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.989 [2024-07-15 09:43:49.194219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.989 [2024-07-15 09:43:49.194532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.989 [2024-07-15 09:43:49.194558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.989 [2024-07-15 09:43:49.199353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.989 [2024-07-15 09:43:49.199655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.989 [2024-07-15 09:43:49.199682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.989 [2024-07-15 09:43:49.204440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.989 [2024-07-15 09:43:49.204748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:54.989 [2024-07-15 09:43:49.204776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.989 [2024-07-15 09:43:49.209520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.989 [2024-07-15 09:43:49.209817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.989 [2024-07-15 09:43:49.209845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.989 [2024-07-15 09:43:49.214632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.989 [2024-07-15 09:43:49.214940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.989 [2024-07-15 09:43:49.214968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.989 [2024-07-15 09:43:49.219741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.989 [2024-07-15 09:43:49.220059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.989 [2024-07-15 09:43:49.220086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.989 [2024-07-15 09:43:49.224827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.225164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.225188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.229956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.230244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.230273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.235066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.235374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.235410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.240240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.240539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.240567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.245329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.245633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.245660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.250518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.250827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.250854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.255831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.256176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.256204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.260911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.261226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.261253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.266114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.266419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.266447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.271298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.271593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.271616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.276348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.276639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.276661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.281392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.281687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.281716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.286483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.286790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.286817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.291646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.291969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.292004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.296698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.297029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.297057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.301705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.302013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.302041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.306719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.307045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.307072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.311730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.312064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.312093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.316735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.317067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.317095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.321857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.322169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.322198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.326930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.327221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.327248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.332003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.332295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.332322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.337023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.337327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.337354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.342120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.342415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.342441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.347207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 
[2024-07-15 09:43:49.347501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.347528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.352251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.352542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.352570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.990 [2024-07-15 09:43:49.357371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.990 [2024-07-15 09:43:49.357667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.990 [2024-07-15 09:43:49.357695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.362375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.362665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.362693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.367432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.367727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.367755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.372640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.372963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.372991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.377820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.378156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.378184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.382956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with 
pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.383249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.383276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.387985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.388279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.388306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.393035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.393336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.393363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.398132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.398438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.398467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.403216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.403513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.403541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.408305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.408599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.408627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.413396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.413685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.413712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.418523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.418830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.418857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.423585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.423875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.423915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.428675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.428995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.429040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.433789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.434097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.434124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.438880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.439192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.439221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.444048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.444370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.444400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.449178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.449479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.449508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.991 [2024-07-15 09:43:49.454227] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:54.991 [2024-07-15 09:43:49.454521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.991 [2024-07-15 09:43:49.454549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.250 [2024-07-15 09:43:49.459330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.250 [2024-07-15 09:43:49.459620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.250 [2024-07-15 09:43:49.459647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.250 [2024-07-15 09:43:49.464402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.250 [2024-07-15 09:43:49.464698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.250 [2024-07-15 09:43:49.464726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.250 [2024-07-15 09:43:49.469492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.250 [2024-07-15 09:43:49.469788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.250 [2024-07-15 09:43:49.469816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.474502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.474807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.474835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.479591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.479899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.479939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.484635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.484940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.484963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:17:55.251 [2024-07-15 09:43:49.489685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.489991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.490014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.494677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.494980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.495008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.499733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.500041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.500068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.504722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.505038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.505065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.509757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.510065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.510092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.514765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.515072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.515099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.519816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.520125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.520153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.524842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.525159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.525186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.529952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.530255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.530283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.535007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.535306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.535341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.540127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.540430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.540465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.545268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.545566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.545589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.550382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.550678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.550706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.555464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.555763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.555791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.560563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.560857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.560885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.565649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.565953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.565981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.570701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.251 [2024-07-15 09:43:49.571008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.251 [2024-07-15 09:43:49.571031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.251 [2024-07-15 09:43:49.575727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.576031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.576054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.580958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.581265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.581293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.586042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.586334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.586364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.591092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.591385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.591413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.596162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.596453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.596481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.601262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.601554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.601582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.606295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.606596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.606624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.611369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.611661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.611689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.616533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.616828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.616856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.621604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.621913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.621941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.626603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.626915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 
[2024-07-15 09:43:49.626942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.631735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.632041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.632064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.636742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.637057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.637084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.641877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.642210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.642238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.647028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.647330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.647357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.652128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.652427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.652455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.657158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.657467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.657495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.662220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.662512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.662539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.667244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.667547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.667575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.672331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.252 [2024-07-15 09:43:49.672619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.252 [2024-07-15 09:43:49.672646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.252 [2024-07-15 09:43:49.677365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.253 [2024-07-15 09:43:49.677659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.253 [2024-07-15 09:43:49.677687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.253 [2024-07-15 09:43:49.682455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.253 [2024-07-15 09:43:49.682747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.253 [2024-07-15 09:43:49.682775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.253 [2024-07-15 09:43:49.687527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.253 [2024-07-15 09:43:49.687814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.253 [2024-07-15 09:43:49.687841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.253 [2024-07-15 09:43:49.692629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.253 [2024-07-15 09:43:49.692934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.253 [2024-07-15 09:43:49.692957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.253 [2024-07-15 09:43:49.697612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.253 [2024-07-15 09:43:49.697928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.253 [2024-07-15 09:43:49.697955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.253 [2024-07-15 09:43:49.702622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.253 [2024-07-15 09:43:49.702926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.253 [2024-07-15 09:43:49.702953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.253 [2024-07-15 09:43:49.707650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.253 [2024-07-15 09:43:49.707957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.253 [2024-07-15 09:43:49.707996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.253 [2024-07-15 09:43:49.713254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.253 [2024-07-15 09:43:49.713547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.253 [2024-07-15 09:43:49.713575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.718340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.718632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.718660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.723372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.723663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.723690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.728388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.728677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.728705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.733412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.733702] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.733730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.738487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.738779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.738808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.743540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.743840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.743867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.748578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.748886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.748923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.753607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.753913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.753940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.758692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.759002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.759029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.763727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.764033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.764061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.768808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.769121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.769149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.773863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.774172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.774199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.778953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.779249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.779276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.783997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.784295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.784329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.789060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.789358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.789392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.794103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.794394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.794421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.512 [2024-07-15 09:43:49.799154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.512 [2024-07-15 09:43:49.799445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.512 [2024-07-15 09:43:49.799472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.804212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 
09:43:49.804503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.804530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.809253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.809544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.809572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.814307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.814600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.814628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.819348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.819639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.819666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.824380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.824681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.824708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.829533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.829837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.829865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.834597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.834902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.834929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.839621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with 
pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.839929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.839967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.844345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.844414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.844438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.849294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.849363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.849385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.854281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.854346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.854368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.859203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.859272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.859295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.864148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.864216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.864239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.869129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.869194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.869223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.874144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.874213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.874236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.879195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.879263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.879286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.884235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.884301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.884323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.889257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.889340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.889363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.894241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.894311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.894334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.899194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.899268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.899290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.904357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.904425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.904462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.909467] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.909534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.909556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.914448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.914516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.914538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.919404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.919471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.919493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.924401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.924468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.924490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.929502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.929569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.929591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.934591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.934657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.934679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.939645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.939710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.939732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
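Each *ERROR*/*NOTICE* pair above records one data-digest (CRC32C) failure detected on the TCP qpair (tqpair=0xa16500) together with the WRITE command that then completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22); host/digest.sh counts those completions once the run finishes. A hedged sketch of that count, reusing only the RPC socket, bdev name, and jq filter that appear further down in this log (the helper variable names are illustrative, not the harness's own):

    # Ask bdevperf, over its RPC socket, how many commands completed with a
    # transient transport error on the attached controller.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # The digest-error test only passes if at least one such completion was
    # observed; the run traced here reports 392.
    (( errcount > 0 )) && echo "transient transport errors: $errcount"
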
00:17:55.513 [2024-07-15 09:43:49.944775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.944844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.944867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.949975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.950047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.950070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.954958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.955043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.955065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.959979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.960045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.960067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.965053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.965120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.965142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.970139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.970205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.970227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.513 [2024-07-15 09:43:49.975353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.513 [2024-07-15 09:43:49.975422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.513 [2024-07-15 09:43:49.975444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:49.980395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:49.980461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:49.980484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:49.985546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:49.985630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:49.985652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:49.990624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:49.990707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:49.990729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:49.995738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:49.995835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:49.995858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:50.000875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:50.000969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:50.000991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:50.005953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:50.006034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:50.006056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:50.011007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:50.011073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:50.011095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:50.016091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:50.016156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:50.016178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:50.021145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:50.021213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:50.021235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:50.026153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:50.026224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:50.026247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:50.031174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:50.031256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:50.031278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:50.036264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:50.036364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:50.036385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:50.041314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:50.041383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:50.041405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:50.046365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:50.046477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:50.046499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.773 [2024-07-15 09:43:50.051550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.773 [2024-07-15 09:43:50.051630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.773 [2024-07-15 09:43:50.051653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.056776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.056856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.056878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.061911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.062008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.062030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.067111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.067177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.067199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.072263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.072329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.072352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.077447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.077535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.077557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.082653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.082735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 
[2024-07-15 09:43:50.082756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.087796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.087876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.087897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.092911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.093048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.093071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.098066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.098134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.098156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.103215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.103282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.103306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.108255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.108325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.108347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.113373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.113473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.113495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.118429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.118510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.118532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.123442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.123506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.123528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.128474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.128557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.128579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.133581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.133645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.133667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.138590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.138654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.138676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.143609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.143690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.143712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.148738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.148819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.148840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.153912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.153994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.154019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.158941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.159020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.159043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.163987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.164057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.164079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.169109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.169175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.169198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.174075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.174144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.174167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.179215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.179282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.179304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.184420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.184486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.184508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.189415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.774 [2024-07-15 09:43:50.189480] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.774 [2024-07-15 09:43:50.189501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.774 [2024-07-15 09:43:50.194458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.775 [2024-07-15 09:43:50.194541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.775 [2024-07-15 09:43:50.194563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.775 [2024-07-15 09:43:50.199540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.775 [2024-07-15 09:43:50.199613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.775 [2024-07-15 09:43:50.199634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.775 [2024-07-15 09:43:50.204601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.775 [2024-07-15 09:43:50.204664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.775 [2024-07-15 09:43:50.204686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.775 [2024-07-15 09:43:50.209654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa16500) with pdu=0x2000190fef90 00:17:55.775 [2024-07-15 09:43:50.209716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.775 [2024-07-15 09:43:50.209738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.775 00:17:55.775 Latency(us) 00:17:55.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.775 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:55.775 nvme0n1 : 2.00 6080.84 760.10 0.00 0.00 2625.29 2010.76 8817.57 00:17:55.775 =================================================================================================================== 00:17:55.775 Total : 6080.84 760.10 0.00 0.00 2625.29 2010.76 8817.57 00:17:55.775 0 00:17:55.775 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:56.033 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:56.033 | .driver_specific 00:17:56.033 | .nvme_error 00:17:56.033 | .status_code 00:17:56.033 | .command_transient_transport_error' 00:17:56.033 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:56.033 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:56.291 09:43:50 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 392 > 0 )) 00:17:56.291 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80913 00:17:56.291 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80913 ']' 00:17:56.291 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80913 00:17:56.291 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:56.291 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:56.291 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80913 00:17:56.291 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:56.291 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:56.291 killing process with pid 80913 00:17:56.291 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80913' 00:17:56.291 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80913 00:17:56.291 Received shutdown signal, test time was about 2.000000 seconds 00:17:56.291 00:17:56.291 Latency(us) 00:17:56.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.291 =================================================================================================================== 00:17:56.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.291 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80913 00:17:56.550 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80704 00:17:56.550 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80704 ']' 00:17:56.550 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80704 00:17:56.550 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:17:56.550 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:56.550 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80704 00:17:56.550 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:56.550 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:56.550 killing process with pid 80704 00:17:56.550 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80704' 00:17:56.550 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80704 00:17:56.550 09:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80704 00:17:56.807 00:17:56.807 real 0m18.649s 00:17:56.807 user 0m36.206s 00:17:56.807 sys 0m4.678s 00:17:56.807 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:56.807 09:43:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:56.808 ************************************ 00:17:56.808 END TEST nvmf_digest_error 00:17:56.808 
************************************ 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.808 rmmod nvme_tcp 00:17:56.808 rmmod nvme_fabrics 00:17:56.808 rmmod nvme_keyring 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80704 ']' 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80704 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80704 ']' 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80704 00:17:56.808 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80704) - No such process 00:17:56.808 Process with pid 80704 is not found 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80704 is not found' 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:56.808 00:17:56.808 real 0m38.443s 00:17:56.808 user 1m13.723s 00:17:56.808 sys 0m9.742s 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:56.808 09:43:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:56.808 ************************************ 00:17:56.808 END TEST nvmf_digest 00:17:56.808 ************************************ 00:17:57.067 09:43:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:57.067 09:43:51 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:17:57.067 09:43:51 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:17:57.067 09:43:51 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:57.067 09:43:51 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:57.067 09:43:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.067 09:43:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:57.067 ************************************ 00:17:57.067 START TEST nvmf_host_multipath 00:17:57.067 ************************************ 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:57.067 * Looking for test storage... 00:17:57.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.067 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:57.068 Cannot 
find device "nvmf_tgt_br" 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.068 Cannot find device "nvmf_tgt_br2" 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:57.068 Cannot find device "nvmf_tgt_br" 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:57.068 Cannot find device "nvmf_tgt_br2" 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:57.068 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.327 09:43:51 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:57.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:57.327 00:17:57.327 --- 10.0.0.2 ping statistics --- 00:17:57.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.327 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:57.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:57.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:17:57.327 00:17:57.327 --- 10.0.0.3 ping statistics --- 00:17:57.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.327 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:57.327 00:17:57.327 --- 10.0.0.1 ping statistics --- 00:17:57.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.327 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=81184 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 81184 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81184 ']' 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.327 09:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:57.585 [2024-07-15 09:43:51.808065] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:17:57.585 [2024-07-15 09:43:51.808182] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.585 [2024-07-15 09:43:51.948561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:57.844 [2024-07-15 09:43:52.086325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
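For readers reproducing this run, the nvmf_veth_init trace above amounts to the topology sketched below. This is a hedged condensation assembled only from the ip/iptables commands already logged (it assumes root, iproute2 and iptables, and the 10.0.0.0/24 addressing this test uses); it is not a verbatim copy of nvmf/common.sh.

  # target-side veth ends move into the nvmf_tgt_ns_spdk namespace, initiator side stays in root
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = the two target addresses used by this run
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the three root-namespace peer ends together and open NVMe/TCP port 4420
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the trace (10.0.0.2, 10.0.0.3 from root, 10.0.0.1 from inside the namespace) are the smoke test that this bridge actually forwards before nvmf_tgt is started.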
00:17:57.844 [2024-07-15 09:43:52.086401] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.844 [2024-07-15 09:43:52.086416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.844 [2024-07-15 09:43:52.086427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.844 [2024-07-15 09:43:52.086436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.844 [2024-07-15 09:43:52.086584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.844 [2024-07-15 09:43:52.086602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.844 [2024-07-15 09:43:52.143929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:58.409 09:43:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.409 09:43:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:17:58.409 09:43:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.409 09:43:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.409 09:43:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:58.667 09:43:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.667 09:43:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81184 00:17:58.667 09:43:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:58.667 [2024-07-15 09:43:53.124010] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.929 09:43:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:58.929 Malloc0 00:17:59.187 09:43:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:59.187 09:43:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:59.753 09:43:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.753 [2024-07-15 09:43:54.141995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.753 09:43:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:00.012 [2024-07-15 09:43:54.398129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:00.012 09:43:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81234 00:18:00.012 09:43:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:00.012 09:43:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 
-- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:00.012 09:43:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81234 /var/tmp/bdevperf.sock 00:18:00.012 09:43:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81234 ']' 00:18:00.012 09:43:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.012 09:43:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.012 09:43:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.012 09:43:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.012 09:43:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:00.947 09:43:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.947 09:43:55 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:00.947 09:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:01.515 09:43:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:01.772 Nvme0n1 00:18:01.772 09:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:02.029 Nvme0n1 00:18:02.029 09:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:02.029 09:43:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:02.960 09:43:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:02.960 09:43:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:03.218 09:43:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:03.476 09:43:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:03.476 09:43:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81285 00:18:03.476 09:43:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:03.476 09:43:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81184 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:10.031 09:44:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:10.031 09:44:03 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:10.032 Attaching 4 probes... 00:18:10.032 @path[10.0.0.2, 4421]: 17022 00:18:10.032 @path[10.0.0.2, 4421]: 18046 00:18:10.032 @path[10.0.0.2, 4421]: 17682 00:18:10.032 @path[10.0.0.2, 4421]: 17667 00:18:10.032 @path[10.0.0.2, 4421]: 17920 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81285 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:10.032 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:10.289 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:10.289 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81397 00:18:10.289 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:10.289 09:44:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81184 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:16.842 09:44:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:16.842 09:44:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:16.842 09:44:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:16.842 09:44:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:16.842 Attaching 4 probes... 
00:18:16.842 @path[10.0.0.2, 4420]: 17450 00:18:16.842 @path[10.0.0.2, 4420]: 17732 00:18:16.842 @path[10.0.0.2, 4420]: 17934 00:18:16.842 @path[10.0.0.2, 4420]: 18244 00:18:16.842 @path[10.0.0.2, 4420]: 17882 00:18:16.842 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:16.842 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:16.842 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:16.842 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:16.842 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:16.842 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:16.842 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81397 00:18:16.842 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:16.842 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:16.842 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:16.842 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:17.101 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:17.101 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81510 00:18:17.101 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:17.101 09:44:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81184 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:23.663 Attaching 4 probes... 
00:18:23.663 @path[10.0.0.2, 4421]: 11826 00:18:23.663 @path[10.0.0.2, 4421]: 15613 00:18:23.663 @path[10.0.0.2, 4421]: 15529 00:18:23.663 @path[10.0.0.2, 4421]: 15428 00:18:23.663 @path[10.0.0.2, 4421]: 15592 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81510 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:23.663 09:44:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:23.663 09:44:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:23.921 09:44:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:23.921 09:44:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81628 00:18:23.921 09:44:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81184 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:23.921 09:44:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:30.481 Attaching 4 probes... 
00:18:30.481 00:18:30.481 00:18:30.481 00:18:30.481 00:18:30.481 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81628 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:30.481 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:30.482 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:30.482 09:44:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:30.775 09:44:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:30.775 09:44:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81740 00:18:30.775 09:44:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:30.775 09:44:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81184 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:37.338 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:37.339 Attaching 4 probes... 
00:18:37.339 @path[10.0.0.2, 4421]: 16685 00:18:37.339 @path[10.0.0.2, 4421]: 15708 00:18:37.339 @path[10.0.0.2, 4421]: 17196 00:18:37.339 @path[10.0.0.2, 4421]: 17617 00:18:37.339 @path[10.0.0.2, 4421]: 17599 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81740 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:37.339 [2024-07-15 09:44:31.725026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faff70 is same with the state(5) to be set 00:18:37.339 [2024-07-15 09:44:31.725086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faff70 is same with the state(5) to be set 00:18:37.339 [2024-07-15 09:44:31.725098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1faff70 is same with the state(5) to be set 00:18:37.339 09:44:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:38.715 09:44:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:38.715 09:44:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81864 00:18:38.715 09:44:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81184 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:38.715 09:44:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:45.276 09:44:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:45.276 09:44:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.276 Attaching 4 probes... 
00:18:45.276 @path[10.0.0.2, 4420]: 16949 00:18:45.276 @path[10.0.0.2, 4420]: 17478 00:18:45.276 @path[10.0.0.2, 4420]: 17168 00:18:45.276 @path[10.0.0.2, 4420]: 16422 00:18:45.276 @path[10.0.0.2, 4420]: 16899 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81864 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:45.276 [2024-07-15 09:44:39.302376] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:45.276 09:44:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:51.828 09:44:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:51.828 09:44:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82033 00:18:51.828 09:44:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81184 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:51.828 09:44:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:57.137 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:57.138 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.396 Attaching 4 probes... 
00:18:57.396 @path[10.0.0.2, 4421]: 16877 00:18:57.396 @path[10.0.0.2, 4421]: 17244 00:18:57.396 @path[10.0.0.2, 4421]: 17182 00:18:57.396 @path[10.0.0.2, 4421]: 17266 00:18:57.396 @path[10.0.0.2, 4421]: 17187 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82033 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81234 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81234 ']' 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81234 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:57.396 09:44:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81234 00:18:57.663 killing process with pid 81234 00:18:57.663 09:44:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:57.663 09:44:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:57.663 09:44:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81234' 00:18:57.663 09:44:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81234 00:18:57.663 09:44:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81234 00:18:57.663 Connection closed with partial response: 00:18:57.663 00:18:57.663 00:18:57.663 09:44:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81234 00:18:57.663 09:44:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:57.663 [2024-07-15 09:43:54.464290] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:18:57.663 [2024-07-15 09:43:54.464484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81234 ] 00:18:57.663 [2024-07-15 09:43:54.596359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.663 [2024-07-15 09:43:54.713591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.663 [2024-07-15 09:43:54.766692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:57.663 Running I/O for 90 seconds... 
00:18:57.663 [2024-07-15 09:44:04.656836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.656936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.663 [2024-07-15 09:44:04.657578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.657614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.657652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.657688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.657724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.657759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.657795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.657839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.657876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.657927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.657964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.657985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.658000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.658021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.658036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.658058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.658072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.658097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:57.663 [2024-07-15 09:44:04.658112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.658133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.658149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:57.663 [2024-07-15 09:44:04.658170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.663 [2024-07-15 09:44:04.658185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.658353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.658393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.658429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.658479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.658515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.658551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.658588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.658624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.658660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.658697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.658733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.658770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.658806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.658842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.658888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.658949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.658971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.658986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.659023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.659059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.659096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.659132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.659179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.659215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.659251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:18:57.664 [2024-07-15 09:44:04.659387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.664 [2024-07-15 09:44:04.659863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.664 [2024-07-15 09:44:04.659913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:57.664 [2024-07-15 09:44:04.659936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.659951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.659973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.659988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.660024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.660060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.660097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.660138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.660175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:57.665 [2024-07-15 09:44:04.660542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.660960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.660975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.661006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.661024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.661051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.661067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.661088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.661103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.661125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.661140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.661172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.661188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.661209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.661224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.661246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.661261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.662792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.665 [2024-07-15 09:44:04.662831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.662871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.662890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.662928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.662944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.662966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.662989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.663010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.663025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.663047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.663062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.663084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.663099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.663120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.665 [2024-07-15 09:44:04.663136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:57.665 [2024-07-15 09:44:04.663300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:04.663327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:04.663353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:04.663370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:04.663392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:04.663407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:57.666 
[2024-07-15 09:44:04.663428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:04.663444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:04.663465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:04.663480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:04.663502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:04.663527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:04.663550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:04.663565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:04.663587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:04.663602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:04.663627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:04.663644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:04.663665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:04.663680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:04.663702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:04.663717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:119960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.666 [2024-07-15 09:44:11.257782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.257818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:119432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.257856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.257906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.257957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.257978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.257993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.258014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119464 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.258029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.258050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.258065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.258086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:119480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.258109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.258130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.258145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.258166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.258180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.258201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.258216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.258237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.258252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.258273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.258299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.258320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.258335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:57.666 [2024-07-15 09:44:11.258356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:119536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.666 [2024-07-15 09:44:11.258370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258399] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.258414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.258450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.258489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.258525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.258561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.258597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.258632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.258674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.258710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.258754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:57.667 
[2024-07-15 09:44:11.258776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.258790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.258826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.258869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.258921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.258958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.258979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.258995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.259031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.259067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.259103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.259140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.259175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.259211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.259247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.259282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.259335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.259373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.259409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.259445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.259480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.259517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.667 [2024-07-15 09:44:11.259554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:57.667 [2024-07-15 09:44:11.259575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.667 [2024-07-15 09:44:11.259589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.259612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.259627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.259648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.259663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.259684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.259698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.259720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.259734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.259756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.259779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.259802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.259817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.259845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.259859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.259880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120240 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:57.668 [2024-07-15 09:44:11.259905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.259928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.259944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.259984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.260004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.260041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.260077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.260113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.260150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.260186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.260221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.260258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 
09:44:11.260649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.260852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.260888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.260938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.260959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.260974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.261007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.261030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.261052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.261067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.261088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.261102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.261123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.261138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.261159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.668 [2024-07-15 09:44:11.261173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:57.668 [2024-07-15 09:44:11.261194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.668 [2024-07-15 09:44:11.261217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.261727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.261742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.262398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:11.262426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.262461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:11.262477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.262508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:11.262523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.262554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:11.262568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.262599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:11.262614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.262644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:11.262659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.262689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:11.262705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.262735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:11.262751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:11.262804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:11.262824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.235964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:18.236074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:37 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:18.236159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:18.236197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:18.236234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:18.236270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:18.236305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:18.236341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:18.236377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:18.236413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:18.236448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:18.236484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 
09:44:18.236505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:18.236519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:18.236555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:18.236603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:18.236639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.669 [2024-07-15 09:44:18.236676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.236980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:18.237019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.237047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.669 [2024-07-15 09:44:18.237064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:57.669 [2024-07-15 09:44:18.237087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237578] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.237615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.237653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.237690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.237728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.237765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.237802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.237848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.237887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.237940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.237963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 
[2024-07-15 09:44:18.237978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 
nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.670 [2024-07-15 09:44:18.238567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.238624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.238662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.238700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.670 [2024-07-15 09:44:18.238737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:57.670 [2024-07-15 09:44:18.238759] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.238774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.238797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.238820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.238845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.238859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.238882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.238909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.238934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.671 [2024-07-15 09:44:18.238949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.238972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.671 [2024-07-15 09:44:18.238986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.671 [2024-07-15 09:44:18.239024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.671 [2024-07-15 09:44:18.239062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.671 [2024-07-15 09:44:18.239099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.671 [2024-07-15 09:44:18.239138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 
sqhd:003b p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.671 [2024-07-15 09:44:18.239176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.671 [2024-07-15 09:44:18.239213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 
09:44:18.239949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.239972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.239987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.240010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.240024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.240047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.240062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.240084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.240099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.240122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.240137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:57.671 [2024-07-15 09:44:18.240159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.671 [2024-07-15 09:44:18.240174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105424 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.240809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.672 [2024-07-15 09:44:18.240846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.672 [2024-07-15 09:44:18.240904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.672 [2024-07-15 09:44:18.240946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.240974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.672 [2024-07-15 09:44:18.241005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.241029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.672 [2024-07-15 09:44:18.241044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.241067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.672 [2024-07-15 09:44:18.241082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.241104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.672 [2024-07-15 09:44:18.241119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 
dnr:0 00:18:57.672 [2024-07-15 09:44:18.241142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.672 [2024-07-15 09:44:18.241157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.241179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.241194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.241217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.241232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.241255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.241279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.241303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.241318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.241341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.241356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.241379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.241394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.241416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.241434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:18.241457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:18.241472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:31.725041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.672 [2024-07-15 09:44:31.725089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:31.725108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.672 [2024-07-15 09:44:31.725122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:31.725137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.672 [2024-07-15 09:44:31.725150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:31.725165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.672 [2024-07-15 09:44:31.725178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:31.725192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7100 is same with the state(5) to be set 00:18:57.672 [2024-07-15 09:44:31.725280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:31.725303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:31.725326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:31.725341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:31.725357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:31.725381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:31.725408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:31.725422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:31.725446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.672 [2024-07-15 09:44:31.725460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.672 [2024-07-15 09:44:31.725478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.673 [2024-07-15 09:44:31.725492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.673 [2024-07-15 09:44:31.725507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:57.673 [2024-07-15 09:44:31.725520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[Repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs: each remaining queued I/O on sqid:1 (READ commands for lba 22760 through 23248 and WRITE commands for lba 23344 through 23720) is completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0.]
00:18:57.675 [2024-07-15 09:44:31.728941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.675 [2024-07-15 09:44:31.728955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.675 [2024-07-15 09:44:31.728970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.675 [2024-07-15 09:44:31.729001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.675 [2024-07-15 09:44:31.729017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.675 [2024-07-15 09:44:31.729030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.675 [2024-07-15 09:44:31.729046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.675 [2024-07-15 09:44:31.729059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.675 [2024-07-15 09:44:31.729074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.675 [2024-07-15 09:44:31.729088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.675 [2024-07-15 09:44:31.729106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.675 [2024-07-15 09:44:31.729126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.675 [2024-07-15 09:44:31.729141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.675 [2024-07-15 09:44:31.729154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.675 [2024-07-15 09:44:31.729169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.675 [2024-07-15 09:44:31.729183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.675 [2024-07-15 09:44:31.729198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.675 [2024-07-15 09:44:31.729211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.675 [2024-07-15 09:44:31.729231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.675 [2024-07-15 09:44:31.729245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.675 [2024-07-15 09:44:31.729310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.675 [2024-07-15 09:44:31.729328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.676 [2024-07-15 09:44:31.729339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:23336 len:8 PRP1 0x0 PRP2 0x0 00:18:57.676 [2024-07-15 09:44:31.729353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.676 [2024-07-15 09:44:31.729419] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e3d6d0 was disconnected and freed. reset controller. 00:18:57.676 [2024-07-15 09:44:31.730556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:57.676 [2024-07-15 09:44:31.730595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db7100 (9): Bad file descriptor 00:18:57.676 [2024-07-15 09:44:31.730940] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:57.676 [2024-07-15 09:44:31.730976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db7100 with addr=10.0.0.2, port=4421 00:18:57.676 [2024-07-15 09:44:31.730993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db7100 is same with the state(5) to be set 00:18:57.676 [2024-07-15 09:44:31.731027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db7100 (9): Bad file descriptor 00:18:57.676 [2024-07-15 09:44:31.731059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:57.676 [2024-07-15 09:44:31.731077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:57.676 [2024-07-15 09:44:31.731092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:57.676 [2024-07-15 09:44:31.731123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:57.676 [2024-07-15 09:44:31.731140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:57.676 [2024-07-15 09:44:41.801409] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:57.676 Received shutdown signal, test time was about 55.435747 seconds 00:18:57.676 00:18:57.676 Latency(us) 00:18:57.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.676 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:57.676 Verification LBA range: start 0x0 length 0x4000 00:18:57.676 Nvme0n1 : 55.43 7297.02 28.50 0.00 0.00 17511.19 1124.54 7046430.72 00:18:57.676 =================================================================================================================== 00:18:57.676 Total : 7297.02 28.50 0.00 0.00 17511.19 1124.54 7046430.72 00:18:57.676 09:44:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:58.242 rmmod nvme_tcp 00:18:58.242 rmmod nvme_fabrics 00:18:58.242 rmmod nvme_keyring 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 81184 ']' 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 81184 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81184 ']' 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81184 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:18:58.242 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.243 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81184 00:18:58.243 killing process with pid 81184 00:18:58.243 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:58.243 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:58.243 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81184' 00:18:58.243 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81184 00:18:58.243 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81184 00:18:58.501 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:58.501 09:44:52 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:58.501 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:58.501 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.501 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:58.501 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.501 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.501 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.501 09:44:52 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:58.501 00:18:58.501 real 1m1.579s 00:18:58.501 user 2m51.188s 00:18:58.501 sys 0m18.295s 00:18:58.501 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:58.501 09:44:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:58.501 ************************************ 00:18:58.501 END TEST nvmf_host_multipath 00:18:58.501 ************************************ 00:18:58.501 09:44:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:58.501 09:44:52 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:58.501 09:44:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:58.501 09:44:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.501 09:44:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:58.501 ************************************ 00:18:58.501 START TEST nvmf_timeout 00:18:58.501 ************************************ 00:18:58.501 09:44:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:58.760 * Looking for test storage... 
00:18:58.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:58.760 09:44:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:58.760 09:44:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:58.760 09:44:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.760 09:44:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.760 09:44:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.760 09:44:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.760 09:44:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.761 09:44:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.761 09:44:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.761 09:44:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.761 09:44:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.761 09:44:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.761 
09:44:53 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.761 09:44:53 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:58.761 Cannot find device "nvmf_tgt_br" 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:58.761 Cannot find device "nvmf_tgt_br2" 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:58.761 Cannot find device "nvmf_tgt_br" 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:58.761 Cannot find device "nvmf_tgt_br2" 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:58.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:58.761 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:58.761 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:59.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:18:59.019 00:18:59.019 --- 10.0.0.2 ping statistics --- 00:18:59.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.019 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:59.019 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:59.019 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:18:59.019 00:18:59.019 --- 10.0.0.3 ping statistics --- 00:18:59.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.019 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:59.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:59.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:59.019 00:18:59.019 --- 10.0.0.1 ping statistics --- 00:18:59.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.019 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82351 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82351 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82351 ']' 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.019 09:44:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:59.019 [2024-07-15 09:44:53.458509] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:18:59.019 [2024-07-15 09:44:53.458617] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.276 [2024-07-15 09:44:53.619170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:59.276 [2024-07-15 09:44:53.737186] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.276 [2024-07-15 09:44:53.737251] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.276 [2024-07-15 09:44:53.737263] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.276 [2024-07-15 09:44:53.737272] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.276 [2024-07-15 09:44:53.737279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.276 [2024-07-15 09:44:53.737453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.277 [2024-07-15 09:44:53.737463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.535 [2024-07-15 09:44:53.791085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:00.102 09:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.102 09:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:00.102 09:44:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:00.102 09:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:00.102 09:44:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:00.102 09:44:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.102 09:44:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:00.102 09:44:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:00.360 [2024-07-15 09:44:54.733302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.360 09:44:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:00.618 Malloc0 00:19:00.618 09:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:00.877 09:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:01.145 09:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.416 [2024-07-15 09:44:55.702278] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.416 09:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82399 00:19:01.416 09:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:01.416 09:44:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82399 /var/tmp/bdevperf.sock 00:19:01.416 09:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82399 ']' 00:19:01.417 09:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.417 09:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.417 09:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.417 09:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.417 09:44:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:01.417 [2024-07-15 09:44:55.781069] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:01.417 [2024-07-15 09:44:55.781204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82399 ] 00:19:01.675 [2024-07-15 09:44:55.924924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.675 [2024-07-15 09:44:56.088203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.933 [2024-07-15 09:44:56.143221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:02.499 09:44:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.499 09:44:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:02.499 09:44:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:02.757 09:44:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:03.034 NVMe0n1 00:19:03.034 09:44:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82424 00:19:03.034 09:44:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:03.034 09:44:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:03.292 Running I/O for 10 seconds... 
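Stripped of the xtrace noise, the setup that nvmf/common.sh and host/timeout.sh have driven up to this point boils down to the sequence below. This is only a condensed sketch of commands already visible in the trace: the interface names, the 10.0.0.x addresses, the NQN and the /home/vagrant paths are this run's fixtures rather than required values, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) is wired the same way as the one shown.

# Network fixture (nvmf_veth_init): initiator stays in the root namespace,
# the target runs behind a bridge inside the nvmf_tgt_ns_spdk namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                 # interfaces and bridge brought up as traced above
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Target side: nvmf_tgt inside the namespace, configured over the default RPC socket
# (the script waits for the socket before issuing the calls below).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf on its own RPC socket, controller attached with a
# 5 s controller-loss window and 2 s between reconnect attempts.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests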
00:19:04.224 09:44:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.482 [2024-07-15 09:44:58.755782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.482 [2024-07-15 09:44:58.755849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.755875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.755886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.755912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.755923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.755934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.755944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.755955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.755965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.755976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.755986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.755997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 
[2024-07-15 09:44:58.756068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.482 [2024-07-15 09:44:58.756454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.482 [2024-07-15 09:44:58.756465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756928] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.756981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.756991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757158] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:68408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.483 [2024-07-15 09:44:58.757374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68424 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.483 [2024-07-15 09:44:58.757384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 
[2024-07-15 09:44:58.757593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.757985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.757994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.758005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.758015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.758026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.758035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.758047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.758060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.758072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.758081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.758092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.758102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.758113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.758122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.758134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.758148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.758159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.758168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.758180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.758190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.758201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.758210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.758222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.484 [2024-07-15 09:44:58.758231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.484 [2024-07-15 09:44:58.758242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.485 [2024-07-15 09:44:58.758252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:04.485 [2024-07-15 09:44:58.758263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 
09:44:58.758470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.485 [2024-07-15 09:44:58.758567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.485 [2024-07-15 09:44:58.758587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e934d0 is same with the state(5) to be set 00:19:04.485 [2024-07-15 09:44:58.758612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.485 [2024-07-15 09:44:58.758621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.485 [2024-07-15 09:44:58.758630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68760 len:8 PRP1 0x0 PRP2 0x0 00:19:04.485 [2024-07-15 09:44:58.758639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758694] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e934d0 was disconnected and freed. reset controller. 
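The long dump above is bdev_nvme tearing down the I/O queue pair once the TCP connection to 10.0.0.2:4420 goes away: every queued READ/WRITE is completed manually as ABORTED - SQ DELETION, the qpair (0x1e934d0) is disconnected and freed, and a controller reset is scheduled. The path disappeared because the test itself pulled the listener out from under the running verify workload; the single RPC that triggers this, repeated here from the trace with the NQN and address this run used, is:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420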
00:19:04.485 [2024-07-15 09:44:58.758782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.485 [2024-07-15 09:44:58.758808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.485 [2024-07-15 09:44:58.758830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.485 [2024-07-15 09:44:58.758849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.485 [2024-07-15 09:44:58.758868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.485 [2024-07-15 09:44:58.758877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e48d40 is same with the state(5) to be set 00:19:04.485 [2024-07-15 09:44:58.759109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:04.485 [2024-07-15 09:44:58.759140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e48d40 (9): Bad file descriptor 00:19:04.485 [2024-07-15 09:44:58.759236] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:04.485 [2024-07-15 09:44:58.759258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e48d40 with addr=10.0.0.2, port=4420 00:19:04.485 [2024-07-15 09:44:58.759269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e48d40 is same with the state(5) to be set 00:19:04.485 [2024-07-15 09:44:58.759287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e48d40 (9): Bad file descriptor 00:19:04.485 [2024-07-15 09:44:58.759309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:04.485 [2024-07-15 09:44:58.759319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:04.485 [2024-07-15 09:44:58.759330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:04.485 [2024-07-15 09:44:58.759350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:04.485 [2024-07-15 09:44:58.759361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:04.485 09:44:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:06.472 [2024-07-15 09:45:00.759665] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:06.472 [2024-07-15 09:45:00.759724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e48d40 with addr=10.0.0.2, port=4420 00:19:06.472 [2024-07-15 09:45:00.759740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e48d40 is same with the state(5) to be set 00:19:06.472 [2024-07-15 09:45:00.759766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e48d40 (9): Bad file descriptor 00:19:06.472 [2024-07-15 09:45:00.759786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:06.472 [2024-07-15 09:45:00.759796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:06.472 [2024-07-15 09:45:00.759808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:06.472 [2024-07-15 09:45:00.759836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:06.472 [2024-07-15 09:45:00.759848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:06.472 09:45:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:06.472 09:45:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:06.472 09:45:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:06.730 09:45:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:06.730 09:45:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:06.730 09:45:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:06.730 09:45:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:06.989 09:45:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:06.989 09:45:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:08.388 [2024-07-15 09:45:02.760150] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:08.388 [2024-07-15 09:45:02.760232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e48d40 with addr=10.0.0.2, port=4420 00:19:08.388 [2024-07-15 09:45:02.760250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e48d40 is same with the state(5) to be set 00:19:08.388 [2024-07-15 09:45:02.760278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e48d40 (9): Bad file descriptor 00:19:08.388 [2024-07-15 09:45:02.760299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:08.388 [2024-07-15 09:45:02.760310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:08.388 [2024-07-15 09:45:02.760321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:19:08.388 [2024-07-15 09:45:02.760351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:08.388 [2024-07-15 09:45:02.760364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:10.305 [2024-07-15 09:45:04.760560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:10.305 [2024-07-15 09:45:04.760632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:10.305 [2024-07-15 09:45:04.760645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:10.305 [2024-07-15 09:45:04.760657] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:10.305 [2024-07-15 09:45:04.760686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:11.680 00:19:11.680 Latency(us) 00:19:11.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.680 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:11.680 Verification LBA range: start 0x0 length 0x4000 00:19:11.680 NVMe0n1 : 8.22 1029.99 4.02 15.57 0.00 122256.24 3991.74 7015926.69 00:19:11.680 =================================================================================================================== 00:19:11.680 Total : 1029.99 4.02 15.57 0.00 122256.24 3991.74 7015926.69 00:19:11.680 0 00:19:11.939 09:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:11.939 09:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:11.939 09:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:12.505 09:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:12.505 09:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:12.505 09:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:12.505 09:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:12.505 09:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:12.505 09:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82424 00:19:12.505 09:45:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82399 00:19:12.505 09:45:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82399 ']' 00:19:12.505 09:45:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82399 00:19:12.505 09:45:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:12.505 09:45:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.505 09:45:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82399 00:19:12.764 09:45:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:12.764 09:45:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:12.764 killing process with pid 82399 00:19:12.764 09:45:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82399' 00:19:12.764 09:45:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82399 
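The get_controller and get_bdev checks traced at host/timeout.sh@62/@63 above reduce to two rpc.py calls against the bdevperf RPC socket; once the controller-loss timeout has fired, both are expected to print nothing. A minimal bash sketch of those helpers, reconstructed only from the xtrace in this log (the actual functions in host/timeout.sh may differ):

    #!/usr/bin/env bash
    # Reconstructed from the xtrace above, not copied from the SPDK repo.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    get_controller() {
        # Names of the NVMe controllers the bdevperf app still knows about.
        "$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # Names of the bdevs those controllers expose (e.g. NVMe0n1).
        "$rpc_py" -s "$bdevperf_rpc_sock" bdev_get_bdevs | jq -r '.[].name'
    }

    # Earlier (timeout.sh@57/@58) the test expects NVMe0/NVMe0n1; after the
    # controller-loss timeout it expects the empty string, i.e. [[ '' == '' ]].
    [[ $(get_controller) == '' ]] && [[ $(get_bdev) == '' ]]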
00:19:12.764 Received shutdown signal, test time was about 9.440534 seconds 00:19:12.764 00:19:12.764 Latency(us) 00:19:12.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.764 =================================================================================================================== 00:19:12.764 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.764 09:45:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82399 00:19:12.764 09:45:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:13.025 [2024-07-15 09:45:07.470678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.025 09:45:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82546 00:19:13.025 09:45:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82546 /var/tmp/bdevperf.sock 00:19:13.283 09:45:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:13.284 09:45:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82546 ']' 00:19:13.284 09:45:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.284 09:45:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.284 09:45:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.284 09:45:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.284 09:45:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:13.284 [2024-07-15 09:45:07.548912] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
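Before the second bdevperf run starts, the trace at host/timeout.sh@71-@76 re-adds the TCP listener, launches bdevperf suspended (-z) with a private RPC socket, and waits for that socket to answer before doing any configuration. A rough sketch of that sequence, using only commands visible in this log (the polling loop only illustrates what waitforlisten in autotest_common.sh achieves, it is not that implementation):

    #!/usr/bin/env bash
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # Re-add the NVMe/TCP listener on the target side.
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Start bdevperf idle (-z): it runs no I/O until perform_tests is sent later.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r "$bdevperf_rpc_sock" -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!

    # Effect of "waitforlisten $bdevperf_pid $bdevperf_rpc_sock": poll until the app
    # answers RPCs on its UNIX-domain socket (rpc_get_methods is a harmless probe).
    for ((i = 0; i < 100; i++)); do
        "$rpc_py" -s "$bdevperf_rpc_sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done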
00:19:13.284 [2024-07-15 09:45:07.549050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82546 ] 00:19:13.284 [2024-07-15 09:45:07.689264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.542 [2024-07-15 09:45:07.812183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.542 [2024-07-15 09:45:07.867074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:14.109 09:45:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.109 09:45:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:14.109 09:45:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:14.677 09:45:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:14.677 NVMe0n1 00:19:14.935 09:45:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82570 00:19:14.935 09:45:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:14.935 09:45:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:14.935 Running I/O for 10 seconds... 00:19:15.872 09:45:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.132 [2024-07-15 09:45:10.419119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.132 [2024-07-15 09:45:10.419195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.132 [2024-07-15 09:45:10.419231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.132 [2024-07-15 09:45:10.419254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.132 [2024-07-15 09:45:10.419280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.132 [2024-07-15 09:45:10.419301] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.132 [2024-07-15 09:45:10.419323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.132 [2024-07-15 09:45:10.419344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.132 [2024-07-15 09:45:10.419364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.132 [2024-07-15 09:45:10.419385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.132 [2024-07-15 09:45:10.419406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.132 [2024-07-15 09:45:10.419426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.132 [2024-07-15 09:45:10.419447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.132 [2024-07-15 09:45:10.419468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.132 [2024-07-15 09:45:10.419479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419508] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.419886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.133 [2024-07-15 09:45:10.419924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.133 [2024-07-15 09:45:10.419947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.419959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.133 [2024-07-15 09:45:10.419968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:16.133 [2024-07-15 09:45:10.419979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.133 [2024-07-15 09:45:10.419989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.133 [2024-07-15 09:45:10.420009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.133 [2024-07-15 09:45:10.420030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.133 [2024-07-15 09:45:10.420050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.133 [2024-07-15 09:45:10.420072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420193] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.133 [2024-07-15 09:45:10.420386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.133 [2024-07-15 09:45:10.420398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.420428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.420448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.420469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.420490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.420510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.420531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.420551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.420572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:114 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65752 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.420988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.420998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.421018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.421039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:16.134 [2024-07-15 09:45:10.421060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.421080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.134 [2024-07-15 09:45:10.421101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.421123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.421154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.421176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.421197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.421220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.421240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.421261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.421282] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.134 [2024-07-15 09:45:10.421313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.134 [2024-07-15 09:45:10.421323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.135 [2024-07-15 09:45:10.421333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.135 [2024-07-15 09:45:10.421353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.135 [2024-07-15 09:45:10.421373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.135 [2024-07-15 09:45:10.421394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.135 [2024-07-15 09:45:10.421414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.135 [2024-07-15 09:45:10.421434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.135 [2024-07-15 09:45:10.421455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.135 [2024-07-15 09:45:10.421777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93d4d0 is same with the state(5) to be set 00:19:16.135 [2024-07-15 09:45:10.421799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.135 [2024-07-15 09:45:10.421807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.135 [2024-07-15 09:45:10.421820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65976 len:8 PRP1 0x0 PRP2 0x0 00:19:16.135 [2024-07-15 09:45:10.421830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.135 [2024-07-15 09:45:10.421851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.135 [2024-07-15 09:45:10.421859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66304 len:8 PRP1 0x0 PRP2 0x0 00:19:16.135 [2024-07-15 09:45:10.421874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.135 [2024-07-15 09:45:10.421900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.135 [2024-07-15 09:45:10.421910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66312 len:8 PRP1 0x0 PRP2 0x0 00:19:16.135 [2024-07-15 09:45:10.421919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.135 [2024-07-15 09:45:10.421935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.135 [2024-07-15 09:45:10.421943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66320 len:8 PRP1 0x0 PRP2 0x0 00:19:16.135 [2024-07-15 09:45:10.421952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.135 [2024-07-15 09:45:10.421970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.135 [2024-07-15 09:45:10.421978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66328 len:8 PRP1 0x0 PRP2 0x0 00:19:16.135 [2024-07-15 09:45:10.421987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.421996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.135 [2024-07-15 09:45:10.422004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.135 [2024-07-15 09:45:10.422011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66336 len:8 PRP1 0x0 PRP2 0x0 00:19:16.135 [2024-07-15 09:45:10.422020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.422030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.135 [2024-07-15 09:45:10.422037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.135 [2024-07-15 09:45:10.422044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66344 len:8 PRP1 0x0 PRP2 0x0 00:19:16.135 [2024-07-15 09:45:10.422053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.422062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.135 [2024-07-15 09:45:10.422069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.135 [2024-07-15 09:45:10.422077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66352 len:8 PRP1 0x0 PRP2 0x0 00:19:16.135 [2024-07-15 09:45:10.422086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.422095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.135 [2024-07-15 09:45:10.422102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.135 [2024-07-15 09:45:10.422114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66360 len:8 PRP1 0x0 PRP2 0x0 00:19:16.135 [2024-07-15 09:45:10.422123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.135 [2024-07-15 09:45:10.422181] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x93d4d0 was disconnected and freed. reset controller. 
00:19:16.135 [2024-07-15 09:45:10.422447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:16.135 [2024-07-15 09:45:10.422524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f2d40 (9): Bad file descriptor 00:19:16.135 [2024-07-15 09:45:10.422628] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:16.135 [2024-07-15 09:45:10.422648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f2d40 with addr=10.0.0.2, port=4420 00:19:16.135 [2024-07-15 09:45:10.422670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f2d40 is same with the state(5) to be set 00:19:16.135 [2024-07-15 09:45:10.422687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f2d40 (9): Bad file descriptor 00:19:16.135 [2024-07-15 09:45:10.422702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:16.135 [2024-07-15 09:45:10.422711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:16.135 [2024-07-15 09:45:10.422722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:16.135 [2024-07-15 09:45:10.422741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:16.135 [2024-07-15 09:45:10.422752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:16.135 09:45:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:17.070 [2024-07-15 09:45:11.422904] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.070 [2024-07-15 09:45:11.422976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f2d40 with addr=10.0.0.2, port=4420 00:19:17.070 [2024-07-15 09:45:11.422992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f2d40 is same with the state(5) to be set 00:19:17.070 [2024-07-15 09:45:11.423021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f2d40 (9): Bad file descriptor 00:19:17.070 [2024-07-15 09:45:11.423039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:17.070 [2024-07-15 09:45:11.423049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:17.070 [2024-07-15 09:45:11.423061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:17.070 [2024-07-15 09:45:11.423089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:17.070 [2024-07-15 09:45:11.423102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:17.070 09:45:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:17.329 [2024-07-15 09:45:11.710187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.329 09:45:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82570 00:19:18.263 [2024-07-15 09:45:12.436874] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:24.817 00:19:24.817 Latency(us) 00:19:24.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.817 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:24.817 Verification LBA range: start 0x0 length 0x4000 00:19:24.817 NVMe0n1 : 10.01 6200.02 24.22 0.00 0.00 20601.41 1325.61 3035150.89 00:19:24.817 =================================================================================================================== 00:19:24.818 Total : 6200.02 24.22 0.00 0.00 20601.41 1325.61 3035150.89 00:19:24.818 0 00:19:24.818 09:45:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82673 00:19:24.818 09:45:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:24.818 09:45:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:25.075 Running I/O for 10 seconds... 00:19:26.010 09:45:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.269 [2024-07-15 09:45:20.576293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.269 [2024-07-15 09:45:20.576375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.269 [2024-07-15 09:45:20.576400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.269 [2024-07-15 09:45:20.576412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.269 [2024-07-15 09:45:20.576425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.269 [2024-07-15 09:45:20.576434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.269 [2024-07-15 09:45:20.576446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.269 [2024-07-15 09:45:20.576456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.269 [2024-07-15 09:45:20.576469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:26.270 [2024-07-15 09:45:20.576699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.576985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.576996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61656 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:26.270 [2024-07-15 09:45:20.577610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:26.270 [2024-07-15 09:45:20.577782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.270 [2024-07-15 09:45:20.577793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.270 [2024-07-15 09:45:20.577803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.577814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.577823] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.577835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.577844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.577857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.577866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.577878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.577887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.577909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.577919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.577931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.577940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.577951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.577961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.577973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.577988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.577999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:26.271 [2024-07-15 09:45:20.578699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578922] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.578985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.578994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.579005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.579015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.579027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.579037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.579048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.579058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.579070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.579079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.271 [2024-07-15 09:45:20.579091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.271 [2024-07-15 09:45:20.579101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.272 [2024-07-15 09:45:20.579112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.272 [2024-07-15 09:45:20.579121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.272 [2024-07-15 09:45:20.579133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:26.272 [2024-07-15 09:45:20.579143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.272 [2024-07-15 09:45:20.579154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x938140 is same with the state(5) to be set 00:19:26.272 [2024-07-15 09:45:20.579172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:26.272 [2024-07-15 09:45:20.579180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:26.272 [2024-07-15 09:45:20.579189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62288 len:8 PRP1 0x0 PRP2 0x0 00:19:26.272 [2024-07-15 09:45:20.579209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:26.272 [2024-07-15 09:45:20.579275] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x938140 was disconnected and freed. reset controller. 00:19:26.272 [2024-07-15 09:45:20.579501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.272 [2024-07-15 09:45:20.579581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f2d40 (9): Bad file descriptor 00:19:26.272 [2024-07-15 09:45:20.579704] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:26.272 [2024-07-15 09:45:20.579725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f2d40 with addr=10.0.0.2, port=4420 00:19:26.272 [2024-07-15 09:45:20.579736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f2d40 is same with the state(5) to be set 00:19:26.272 [2024-07-15 09:45:20.579755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f2d40 (9): Bad file descriptor 00:19:26.272 [2024-07-15 09:45:20.579770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:26.272 [2024-07-15 09:45:20.579780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:26.272 [2024-07-15 09:45:20.579790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:26.272 [2024-07-15 09:45:20.579810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:26.272 [2024-07-15 09:45:20.579822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.272 09:45:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:27.204 [2024-07-15 09:45:21.580005] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:27.204 [2024-07-15 09:45:21.580097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f2d40 with addr=10.0.0.2, port=4420 00:19:27.204 [2024-07-15 09:45:21.580123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f2d40 is same with the state(5) to be set 00:19:27.204 [2024-07-15 09:45:21.580152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f2d40 (9): Bad file descriptor 00:19:27.204 [2024-07-15 09:45:21.580172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:27.204 [2024-07-15 09:45:21.580181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:27.204 [2024-07-15 09:45:21.580192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:27.204 [2024-07-15 09:45:21.580222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:27.204 [2024-07-15 09:45:21.580234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.137 [2024-07-15 09:45:22.580401] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.137 [2024-07-15 09:45:22.580484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f2d40 with addr=10.0.0.2, port=4420 00:19:28.137 [2024-07-15 09:45:22.580501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f2d40 is same with the state(5) to be set 00:19:28.137 [2024-07-15 09:45:22.580530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f2d40 (9): Bad file descriptor 00:19:28.137 [2024-07-15 09:45:22.580557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:28.137 [2024-07-15 09:45:22.580568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:28.138 [2024-07-15 09:45:22.580579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.138 [2024-07-15 09:45:22.580608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:28.138 [2024-07-15 09:45:22.580620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:29.526 [2024-07-15 09:45:23.584251] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.526 [2024-07-15 09:45:23.584330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f2d40 with addr=10.0.0.2, port=4420 00:19:29.526 [2024-07-15 09:45:23.584347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f2d40 is same with the state(5) to be set 00:19:29.526 [2024-07-15 09:45:23.584599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f2d40 (9): Bad file descriptor 00:19:29.526 [2024-07-15 09:45:23.584843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:29.527 [2024-07-15 09:45:23.584856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:29.527 [2024-07-15 09:45:23.584866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:29.527 [2024-07-15 09:45:23.588689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:29.527 [2024-07-15 09:45:23.588720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:29.527 09:45:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.527 [2024-07-15 09:45:23.826185] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.527 09:45:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82673 00:19:30.459 [2024-07-15 09:45:24.626548] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
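The failure loop above is the expected behaviour while the subsystem's TCP listener is down: each reconnect attempt to 10.0.0.2:4420 fails in uring_sock_create with errno 111 (connection refused) until host/timeout.sh re-adds the listener at @102, after which the next controller reset succeeds. A minimal sketch of that drop-and-restore sequence, using only rpc.py calls that appear in this trace (the 3-second pause mirrors the sleep 3 at host/timeout.sh@101; the removal call itself runs before the excerpt above, and the same command is visible later in this log at host/timeout.sh@126):

    # stop listening on 10.0.0.2:4420; the initiator's reconnect attempts
    # are then expected to fail with errno 111 (connection refused)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # leave the listener down long enough for a few reconnect attempts to fail
    sleep 3
    # restore the listener; the initiator's next reset/reconnect should succeed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420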
00:19:35.720 00:19:35.720 Latency(us) 00:19:35.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.720 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:35.720 Verification LBA range: start 0x0 length 0x4000 00:19:35.720 NVMe0n1 : 10.01 5356.93 20.93 3698.70 0.00 14104.71 688.87 3019898.88 00:19:35.720 =================================================================================================================== 00:19:35.720 Total : 5356.93 20.93 3698.70 0.00 14104.71 0.00 3019898.88 00:19:35.720 0 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82546 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82546 ']' 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82546 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82546 00:19:35.720 killing process with pid 82546 00:19:35.720 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.720 00:19:35.720 Latency(us) 00:19:35.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.720 =================================================================================================================== 00:19:35.720 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82546' 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82546 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82546 00:19:35.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82790 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82790 /var/tmp/bdevperf.sock 00:19:35.720 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82790 ']' 00:19:35.721 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.721 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.721 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.721 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.721 09:45:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:35.721 [2024-07-15 09:45:29.784382] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:19:35.721 [2024-07-15 09:45:29.784703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82790 ] 00:19:35.721 [2024-07-15 09:45:29.924648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.721 [2024-07-15 09:45:30.039800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.721 [2024-07-15 09:45:30.092074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:36.286 09:45:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.286 09:45:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:36.286 09:45:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82790 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:36.286 09:45:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82806 00:19:36.286 09:45:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:36.854 09:45:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:36.854 NVMe0n1 00:19:37.112 09:45:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82842 00:19:37.112 09:45:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:37.112 09:45:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:37.112 Running I/O for 10 seconds... 
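Before this 10-second run starts, the trace above configures the freshly started bdevperf (launched with -z, waiting on /var/tmp/bdevperf.sock) entirely over RPC: bdev_nvme_set_options, then bdev_nvme_attach_controller with a 5-second controller-loss timeout and 2-second reconnect delay, then bdevperf.py perform_tests. Pulled together as one sketch for readability (paths, addresses and option values are copied verbatim from the trace; this is not the literal host/timeout.sh source):

    # bdevperf idles until configured over /var/tmp/bdevperf.sock
    # driver-level retry/timeout options for this test case (flags as recorded above)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    # attach the remote controller; per the option names, declare the controller lost
    # after 5 s without connectivity and retry the connection every 2 s until then
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # start the queued randread workload (-q 128 -o 4096 -w randread -t 10) against the new bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests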
00:19:38.059 09:45:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:38.321 [2024-07-15 09:45:32.593175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9b80 is same with the state(5) to be set
[... 09:45:32.593237 through 09:45:32.594296: the nvmf_tcp_qpair_set_recv_state *ERROR* line above repeats verbatim for tqpair=0x18f9b80, only the timestamps advancing; duplicates elided ...]
00:19:38.322 [2024-07-15 09:45:32.594304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f9b80 is same with the state(5) to be set
00:19:38.322 [2024-07-15 09:45:32.594360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15
09:45:32.594435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.322 [2024-07-15 09:45:32.594826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.322 [2024-07-15 09:45:32.594835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.594845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:24 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.594854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.594865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.594880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.594891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.594914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.594926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.594935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.594946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.594955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.594966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.594975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.594986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.594994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25248 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:38.323 [2024-07-15 09:45:32.595279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 
09:45:32.595478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.323 [2024-07-15 09:45:32.595700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.323 [2024-07-15 09:45:32.595711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.595983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.595995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:38.324 [2024-07-15 09:45:32.596316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 
09:45:32.596518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.324 [2024-07-15 09:45:32.596572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.324 [2024-07-15 09:45:32.596583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:33920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596933] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.596983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.596992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.597002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.325 [2024-07-15 09:45:32.597011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.597022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32310 is same with the state(5) to be set 00:19:38.325 [2024-07-15 09:45:32.597034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:38.325 [2024-07-15 09:45:32.597042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:38.325 [2024-07-15 09:45:32.597050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96208 len:8 PRP1 0x0 PRP2 0x0 00:19:38.325 [2024-07-15 09:45:32.597064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.325 [2024-07-15 09:45:32.597126] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e32310 was disconnected and freed. reset controller. 
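Every entry in the burst above is one queued READ on qpair 1 being completed by hand with ABORTED - SQ DELETION once the TCP connection to the target dropped: the (00/08) pair is status code type 0 (generic) and status code 0x08 (Command Aborted due to SQ Deletion), after which bdev_nvme frees the qpair and schedules a controller reset. When triaging a console log like this one, a quick per-queue tally of the flushed commands can be pulled out with something along these lines (build.log stands in for wherever this output was saved):

  # tally aborted completions per submission queue in a burst like the one above
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c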
00:19:38.325 [2024-07-15 09:45:32.597404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:38.325 [2024-07-15 09:45:32.597488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc3c00 (9): Bad file descriptor 00:19:38.325 [2024-07-15 09:45:32.597600] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:38.325 [2024-07-15 09:45:32.597621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc3c00 with addr=10.0.0.2, port=4420 00:19:38.325 [2024-07-15 09:45:32.597632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc3c00 is same with the state(5) to be set 00:19:38.325 [2024-07-15 09:45:32.597649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc3c00 (9): Bad file descriptor 00:19:38.325 [2024-07-15 09:45:32.597665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:38.325 [2024-07-15 09:45:32.597675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:38.325 [2024-07-15 09:45:32.597685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:38.325 [2024-07-15 09:45:32.597705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:38.325 [2024-07-15 09:45:32.597715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:38.325 09:45:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82842 00:19:40.226 [2024-07-15 09:45:34.597955] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.226 [2024-07-15 09:45:34.598018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc3c00 with addr=10.0.0.2, port=4420 00:19:40.226 [2024-07-15 09:45:34.598036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc3c00 is same with the state(5) to be set 00:19:40.226 [2024-07-15 09:45:34.598061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc3c00 (9): Bad file descriptor 00:19:40.226 [2024-07-15 09:45:34.598092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:40.226 [2024-07-15 09:45:34.598104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:40.226 [2024-07-15 09:45:34.598115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:40.226 [2024-07-15 09:45:34.598142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
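The reconnect cycle above now repeats on a roughly two-second cadence: connect() to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED), the controller re-enters the failed state, and bdev_nvme schedules another attempt; the pattern continues below until the eight-second window expires. Each delayed retry leaves a 'reconnect delay bdev controller NVMe0' record in the bdevperf trace, and the grep -c / (( 3 <= 2 )) exchange a few entries further down suggests the harness requires more than two of them. Stripped of the wrappers, that check amounts to the sketch below (the trace path is the one printed in the trace dump that follows):

  # count delayed reconnects recorded in trace.txt; treat the run as good only if more than two occurred
  delays=$(grep -c 'reconnect delay bdev controller NVMe0' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt)
  (( delays > 2 )) || { echo "only $delays reconnect delays recorded" >&2; exit 1; }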
00:19:40.226 [2024-07-15 09:45:34.598153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:42.758 [2024-07-15 09:45:36.598359] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.758 [2024-07-15 09:45:36.598423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dc3c00 with addr=10.0.0.2, port=4420 00:19:42.758 [2024-07-15 09:45:36.598439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc3c00 is same with the state(5) to be set 00:19:42.758 [2024-07-15 09:45:36.598464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc3c00 (9): Bad file descriptor 00:19:42.759 [2024-07-15 09:45:36.598484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:42.759 [2024-07-15 09:45:36.598494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:42.759 [2024-07-15 09:45:36.598505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:42.759 [2024-07-15 09:45:36.598532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.759 [2024-07-15 09:45:36.598543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:44.134 [2024-07-15 09:45:38.598711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:44.134 [2024-07-15 09:45:38.598781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:44.134 [2024-07-15 09:45:38.598795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:44.134 [2024-07-15 09:45:38.598806] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:44.134 [2024-07-15 09:45:38.598831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:45.522 00:19:45.522 Latency(us) 00:19:45.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.523 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:45.523 NVMe0n1 : 8.17 2176.44 8.50 15.67 0.00 58294.09 7804.74 7015926.69 00:19:45.523 =================================================================================================================== 00:19:45.523 Total : 2176.44 8.50 15.67 0.00 58294.09 7804.74 7015926.69 00:19:45.523 0 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:45.523 Attaching 5 probes... 
00:19:45.523 1292.818098: reset bdev controller NVMe0 00:19:45.523 1292.956519: reconnect bdev controller NVMe0 00:19:45.523 3293.237234: reconnect delay bdev controller NVMe0 00:19:45.523 3293.259836: reconnect bdev controller NVMe0 00:19:45.523 5293.673019: reconnect delay bdev controller NVMe0 00:19:45.523 5293.693905: reconnect bdev controller NVMe0 00:19:45.523 7294.106721: reconnect delay bdev controller NVMe0 00:19:45.523 7294.131186: reconnect bdev controller NVMe0 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82806 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82790 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82790 ']' 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82790 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82790 00:19:45.523 killing process with pid 82790 00:19:45.523 Received shutdown signal, test time was about 8.221954 seconds 00:19:45.523 00:19:45.523 Latency(us) 00:19:45.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.523 =================================================================================================================== 00:19:45.523 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82790' 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82790 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82790 00:19:45.523 09:45:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:45.792 rmmod nvme_tcp 00:19:45.792 rmmod nvme_fabrics 00:19:45.792 rmmod nvme_keyring 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82351 ']' 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82351 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82351 ']' 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82351 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82351 00:19:45.792 killing process with pid 82351 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82351' 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82351 00:19:45.792 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82351 00:19:46.049 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:46.049 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:46.049 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:46.049 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.049 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:46.049 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.049 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.049 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.307 09:45:40 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:46.307 ************************************ 00:19:46.307 END TEST nvmf_timeout 00:19:46.307 ************************************ 00:19:46.307 00:19:46.307 real 0m47.627s 00:19:46.307 user 2m20.469s 00:19:46.307 sys 0m5.604s 00:19:46.307 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:46.307 09:45:40 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:46.307 09:45:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:46.307 09:45:40 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:19:46.307 09:45:40 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:19:46.307 09:45:40 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.307 09:45:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:46.307 09:45:40 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:19:46.307 ************************************ 00:19:46.307 END TEST nvmf_tcp 00:19:46.307 ************************************ 00:19:46.307 00:19:46.307 real 12m24.219s 00:19:46.307 user 30m16.073s 00:19:46.307 sys 3m3.688s 00:19:46.307 09:45:40 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:46.307 09:45:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:46.307 09:45:40 -- common/autotest_common.sh@1142 -- 
# return 0 00:19:46.307 09:45:40 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:19:46.307 09:45:40 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:46.307 09:45:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:46.307 09:45:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:46.307 09:45:40 -- common/autotest_common.sh@10 -- # set +x 00:19:46.307 ************************************ 00:19:46.307 START TEST nvmf_dif 00:19:46.307 ************************************ 00:19:46.307 09:45:40 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:46.307 * Looking for test storage... 00:19:46.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:46.307 09:45:40 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:46.307 09:45:40 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.307 09:45:40 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.307 09:45:40 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.307 09:45:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.307 09:45:40 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.307 09:45:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.307 09:45:40 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:46.307 09:45:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:19:46.307 09:45:40 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:46.564 09:45:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:46.564 09:45:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:46.564 09:45:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:46.564 09:45:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:46.564 09:45:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.564 09:45:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:46.564 09:45:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:46.564 09:45:40 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:46.564 Cannot find device "nvmf_tgt_br" 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@155 -- # true 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:46.564 Cannot find device "nvmf_tgt_br2" 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@156 -- # true 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:46.564 Cannot find device "nvmf_tgt_br" 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@158 -- # true 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:46.564 Cannot find device "nvmf_tgt_br2" 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@159 -- # true 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:46.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:46.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:46.564 09:45:40 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:46.564 09:45:41 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:46.564 09:45:41 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:46.564 09:45:41 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:46.564 09:45:41 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:46.564 09:45:41 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:46.564 09:45:41 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:46.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:19:46.822 00:19:46.822 --- 10.0.0.2 ping statistics --- 00:19:46.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.822 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:46.822 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:46.822 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:19:46.822 00:19:46.822 --- 10.0.0.3 ping statistics --- 00:19:46.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.822 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:46.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:46.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:46.822 00:19:46.822 --- 10.0.0.1 ping statistics --- 00:19:46.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.822 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:19:46.822 09:45:41 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:47.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:47.080 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:47.080 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:47.080 09:45:41 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.080 09:45:41 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:47.080 09:45:41 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:47.080 09:45:41 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.080 09:45:41 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:47.080 09:45:41 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:47.080 09:45:41 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:47.080 09:45:41 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:47.080 09:45:41 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:47.080 09:45:41 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:47.080 09:45:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:47.080 09:45:41 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83275 00:19:47.080 09:45:41 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:47.080 09:45:41 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83275 00:19:47.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.080 09:45:41 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 83275 ']' 00:19:47.080 09:45:41 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.080 09:45:41 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.080 09:45:41 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.080 09:45:41 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.080 09:45:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:47.337 [2024-07-15 09:45:41.562611] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:47.337 [2024-07-15 09:45:41.562701] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.337 [2024-07-15 09:45:41.704747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.595 [2024-07-15 09:45:41.830559] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
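The nvmf_tgt instance starting up here runs inside the nvmf_tgt_ns_spdk network namespace that the preceding entries wired up with veth pairs and a bridge. Reduced to the commands that matter, the topology is sketched below; names and addresses follow the trace above, while error handling, the iptables rules, and the second target interface are omitted:

  # initiator side stays in the default namespace on 10.0.0.1, target side moves into its own namespace on 10.0.0.2
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # one bridge ties the host ends of both veth pairs together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # same sanity check the harness runs: the namespaced side must reach the initiator address
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1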
00:19:47.595 [2024-07-15 09:45:41.830625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.595 [2024-07-15 09:45:41.830640] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.595 [2024-07-15 09:45:41.830650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.595 [2024-07-15 09:45:41.830659] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.595 [2024-07-15 09:45:41.830696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.595 [2024-07-15 09:45:41.887505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:48.159 09:45:42 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.159 09:45:42 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:19:48.159 09:45:42 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:48.159 09:45:42 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:48.159 09:45:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:48.417 09:45:42 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.417 09:45:42 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:48.417 09:45:42 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:48.417 09:45:42 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.417 09:45:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:48.417 [2024-07-15 09:45:42.644644] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.417 09:45:42 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.417 09:45:42 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:48.417 09:45:42 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:48.417 09:45:42 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:48.417 09:45:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:48.417 ************************************ 00:19:48.417 START TEST fio_dif_1_default 00:19:48.417 ************************************ 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:48.417 bdev_null0 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:48.417 
09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.417 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:48.418 [2024-07-15 09:45:42.692722] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.418 { 00:19:48.418 "params": { 00:19:48.418 "name": "Nvme$subsystem", 00:19:48.418 "trtype": "$TEST_TRANSPORT", 00:19:48.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.418 "adrfam": "ipv4", 00:19:48.418 "trsvcid": "$NVMF_PORT", 00:19:48.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.418 "hdgst": ${hdgst:-false}, 00:19:48.418 "ddgst": ${ddgst:-false} 00:19:48.418 }, 00:19:48.418 "method": "bdev_nvme_attach_controller" 00:19:48.418 } 00:19:48.418 EOF 00:19:48.418 )") 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:48.418 "params": { 00:19:48.418 "name": "Nvme0", 00:19:48.418 "trtype": "tcp", 00:19:48.418 "traddr": "10.0.0.2", 00:19:48.418 "adrfam": "ipv4", 00:19:48.418 "trsvcid": "4420", 00:19:48.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:48.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:48.418 "hdgst": false, 00:19:48.418 "ddgst": false 00:19:48.418 }, 00:19:48.418 "method": "bdev_nvme_attach_controller" 00:19:48.418 }' 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:48.418 09:45:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:48.676 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:48.676 fio-3.35 00:19:48.676 Starting 1 thread 00:20:00.892 00:20:00.892 filename0: (groupid=0, jobs=1): err= 0: pid=83347: Mon Jul 15 09:45:53 2024 00:20:00.892 read: IOPS=8783, BW=34.3MiB/s (36.0MB/s)(343MiB/10001msec) 00:20:00.892 slat (usec): min=6, max=450, avg= 8.81, stdev= 3.68 00:20:00.892 clat (usec): min=258, max=4325, avg=429.31, stdev=35.14 00:20:00.892 lat (usec): min=266, max=4350, avg=438.12, stdev=35.74 00:20:00.892 clat percentiles (usec): 00:20:00.892 | 1.00th=[ 400], 5.00th=[ 404], 
10.00th=[ 408], 20.00th=[ 416], 00:20:00.892 | 30.00th=[ 420], 40.00th=[ 424], 50.00th=[ 429], 60.00th=[ 433], 00:20:00.892 | 70.00th=[ 437], 80.00th=[ 441], 90.00th=[ 449], 95.00th=[ 457], 00:20:00.892 | 99.00th=[ 478], 99.50th=[ 506], 99.90th=[ 644], 99.95th=[ 816], 00:20:00.892 | 99.99th=[ 1336] 00:20:00.892 bw ( KiB/s): min=34016, max=35456, per=100.00%, avg=35151.16, stdev=328.31, samples=19 00:20:00.892 iops : min= 8504, max= 8864, avg=8787.79, stdev=82.08, samples=19 00:20:00.892 lat (usec) : 500=99.44%, 750=0.49%, 1000=0.05% 00:20:00.892 lat (msec) : 2=0.01%, 10=0.01% 00:20:00.892 cpu : usr=84.71%, sys=13.03%, ctx=117, majf=0, minf=0 00:20:00.892 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:00.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.892 issued rwts: total=87846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.892 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:00.892 00:20:00.892 Run status group 0 (all jobs): 00:20:00.892 READ: bw=34.3MiB/s (36.0MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=343MiB (360MB), run=10001-10001msec 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:00.892 ************************************ 00:20:00.892 END TEST fio_dif_1_default 00:20:00.892 ************************************ 00:20:00.892 09:45:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.892 00:20:00.892 real 0m10.997s 00:20:00.893 user 0m9.089s 00:20:00.893 sys 0m1.588s 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:00.893 09:45:53 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:00.893 09:45:53 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:00.893 09:45:53 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:00.893 09:45:53 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.893 09:45:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:00.893 ************************************ 00:20:00.893 START TEST fio_dif_1_multi_subsystems 00:20:00.893 ************************************ 
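Both the single-subsystem run that just finished and the two-subsystem variant starting here drive fio through SPDK's bdev ioengine: gen_nvmf_target_json (printed above) produces a JSON config that attaches the target's namespace as an Nvme*n1 bdev, and fio_bdev preloads the plugin before invoking fio against it. Outside the harness the same invocation boils down to roughly the sketch below; the JSON path and job parameters are illustrative rather than copied from this run:

  # stand-alone sketch of the fio-over-spdk_bdev call the fio_bdev wrapper performs
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf=./nvme_tcp.json --thread=1 \
      --name=filename0 --filename=Nvme0n1 \
      --rw=randread --bs=4k --iodepth=4 --time_based=1 --runtime=10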
00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:00.893 bdev_null0 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:00.893 [2024-07-15 09:45:53.737697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:00.893 bdev_null1 00:20:00.893 09:45:53 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.893 { 00:20:00.893 "params": { 00:20:00.893 "name": "Nvme$subsystem", 00:20:00.893 "trtype": "$TEST_TRANSPORT", 00:20:00.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.893 "adrfam": "ipv4", 00:20:00.893 "trsvcid": "$NVMF_PORT", 00:20:00.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.893 "hdgst": ${hdgst:-false}, 00:20:00.893 "ddgst": ${ddgst:-false} 00:20:00.893 }, 00:20:00.893 "method": "bdev_nvme_attach_controller" 00:20:00.893 } 00:20:00.893 EOF 00:20:00.893 )") 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:00.893 
09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:00.893 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.894 { 00:20:00.894 "params": { 00:20:00.894 "name": "Nvme$subsystem", 00:20:00.894 "trtype": "$TEST_TRANSPORT", 00:20:00.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.894 "adrfam": "ipv4", 00:20:00.894 "trsvcid": "$NVMF_PORT", 00:20:00.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.894 "hdgst": ${hdgst:-false}, 00:20:00.894 "ddgst": ${ddgst:-false} 00:20:00.894 }, 00:20:00.894 "method": "bdev_nvme_attach_controller" 00:20:00.894 } 00:20:00.894 EOF 00:20:00.894 )") 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
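Interleaved with the JSON assembly above, the harness probes the fio plugin for linked sanitizer runtimes so they can be preloaded ahead of it. A rough standalone equivalent of that probe (variable names are illustrative; the trace runs the same ldd | grep | awk pipeline once per sanitizer and, as here, ends up with an empty result on an unsanitized build):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # fall back to the clang runtime if the GCC ASan library is not linked in
  [ -n "$asan_lib" ] || asan_lib=$(ldd "$plugin" | grep libclang_rt.asan | awk '{print $3}')
  # whatever was found is prepended to LD_PRELOAD before fio is launched
  export LD_PRELOAD="$asan_lib $plugin"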
00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:00.894 "params": { 00:20:00.894 "name": "Nvme0", 00:20:00.894 "trtype": "tcp", 00:20:00.894 "traddr": "10.0.0.2", 00:20:00.894 "adrfam": "ipv4", 00:20:00.894 "trsvcid": "4420", 00:20:00.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:00.894 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:00.894 "hdgst": false, 00:20:00.894 "ddgst": false 00:20:00.894 }, 00:20:00.894 "method": "bdev_nvme_attach_controller" 00:20:00.894 },{ 00:20:00.894 "params": { 00:20:00.894 "name": "Nvme1", 00:20:00.894 "trtype": "tcp", 00:20:00.894 "traddr": "10.0.0.2", 00:20:00.894 "adrfam": "ipv4", 00:20:00.894 "trsvcid": "4420", 00:20:00.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.894 "hdgst": false, 00:20:00.894 "ddgst": false 00:20:00.894 }, 00:20:00.894 "method": "bdev_nvme_attach_controller" 00:20:00.894 }' 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:00.894 09:45:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:00.894 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:00.894 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:00.894 fio-3.35 00:20:00.894 Starting 2 threads 00:20:10.943 00:20:10.943 filename0: (groupid=0, jobs=1): err= 0: pid=83506: Mon Jul 15 09:46:04 2024 00:20:10.943 read: IOPS=4871, BW=19.0MiB/s (20.0MB/s)(190MiB/10001msec) 00:20:10.943 slat (nsec): min=7570, max=44720, avg=13187.01, stdev=3256.10 00:20:10.943 clat (usec): min=631, max=2348, avg=785.35, stdev=41.95 00:20:10.943 lat (usec): min=641, max=2377, avg=798.54, stdev=43.09 00:20:10.943 clat percentiles (usec): 00:20:10.943 | 1.00th=[ 693], 5.00th=[ 709], 10.00th=[ 725], 20.00th=[ 758], 00:20:10.944 | 30.00th=[ 775], 40.00th=[ 783], 50.00th=[ 791], 60.00th=[ 799], 00:20:10.944 | 70.00th=[ 807], 80.00th=[ 816], 90.00th=[ 824], 95.00th=[ 840], 00:20:10.944 | 99.00th=[ 857], 99.50th=[ 865], 99.90th=[ 922], 99.95th=[ 1303], 00:20:10.944 | 99.99th=[ 1549] 00:20:10.944 bw ( KiB/s): min=19360, max=19744, per=50.05%, avg=19508.21, stdev=101.31, samples=19 00:20:10.944 iops : min= 4840, max= 
4936, avg=4877.05, stdev=25.33, samples=19 00:20:10.944 lat (usec) : 750=16.37%, 1000=83.56% 00:20:10.944 lat (msec) : 2=0.06%, 4=0.01% 00:20:10.944 cpu : usr=88.69%, sys=9.92%, ctx=6, majf=0, minf=9 00:20:10.944 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:10.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.944 issued rwts: total=48724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.944 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:10.944 filename1: (groupid=0, jobs=1): err= 0: pid=83507: Mon Jul 15 09:46:04 2024 00:20:10.944 read: IOPS=4871, BW=19.0MiB/s (20.0MB/s)(190MiB/10001msec) 00:20:10.944 slat (usec): min=7, max=113, avg=13.47, stdev= 3.51 00:20:10.944 clat (usec): min=649, max=2400, avg=783.43, stdev=31.04 00:20:10.944 lat (usec): min=657, max=2438, avg=796.90, stdev=31.51 00:20:10.944 clat percentiles (usec): 00:20:10.944 | 1.00th=[ 725], 5.00th=[ 742], 10.00th=[ 750], 20.00th=[ 766], 00:20:10.944 | 30.00th=[ 775], 40.00th=[ 775], 50.00th=[ 783], 60.00th=[ 791], 00:20:10.944 | 70.00th=[ 799], 80.00th=[ 807], 90.00th=[ 816], 95.00th=[ 824], 00:20:10.944 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 906], 99.95th=[ 1303], 00:20:10.944 | 99.99th=[ 1467] 00:20:10.944 bw ( KiB/s): min=19360, max=19744, per=50.05%, avg=19508.21, stdev=101.31, samples=19 00:20:10.944 iops : min= 4840, max= 4936, avg=4877.05, stdev=25.33, samples=19 00:20:10.944 lat (usec) : 750=8.36%, 1000=91.58% 00:20:10.944 lat (msec) : 2=0.06%, 4=0.01% 00:20:10.944 cpu : usr=89.76%, sys=8.82%, ctx=125, majf=0, minf=0 00:20:10.944 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:10.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.944 issued rwts: total=48724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.944 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:10.944 00:20:10.944 Run status group 0 (all jobs): 00:20:10.944 READ: bw=38.1MiB/s (39.9MB/s), 19.0MiB/s-19.0MiB/s (20.0MB/s-20.0MB/s), io=381MiB (399MB), run=10001-10001msec 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 
-- # set +x 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 ************************************ 00:20:10.944 END TEST fio_dif_1_multi_subsystems 00:20:10.944 ************************************ 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.944 00:20:10.944 real 0m11.112s 00:20:10.944 user 0m18.582s 00:20:10.944 sys 0m2.166s 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.944 09:46:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 09:46:04 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:10.944 09:46:04 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:10.944 09:46:04 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:10.944 09:46:04 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.944 09:46:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 ************************************ 00:20:10.944 START TEST fio_dif_rand_params 00:20:10.944 ************************************ 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # 
local sub_id=0 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 bdev_null0 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 [2024-07-15 09:46:04.896045] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:10.944 { 00:20:10.944 "params": { 00:20:10.944 "name": "Nvme$subsystem", 00:20:10.944 "trtype": "$TEST_TRANSPORT", 00:20:10.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:10.944 "adrfam": "ipv4", 00:20:10.944 "trsvcid": "$NVMF_PORT", 00:20:10.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.944 "hdgst": ${hdgst:-false}, 00:20:10.944 "ddgst": ${ddgst:-false} 00:20:10.944 }, 00:20:10.944 "method": "bdev_nvme_attach_controller" 00:20:10.944 } 00:20:10.944 EOF 00:20:10.944 )") 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:10.944 09:46:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:10.945 "params": { 00:20:10.945 "name": "Nvme0", 00:20:10.945 "trtype": "tcp", 00:20:10.945 "traddr": "10.0.0.2", 00:20:10.945 "adrfam": "ipv4", 00:20:10.945 "trsvcid": "4420", 00:20:10.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:10.945 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:10.945 "hdgst": false, 00:20:10.945 "ddgst": false 00:20:10.945 }, 00:20:10.945 "method": "bdev_nvme_attach_controller" 00:20:10.945 }' 00:20:10.945 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:10.945 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:10.945 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:10.945 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:10.945 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:10.945 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:10.945 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:10.945 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:10.945 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:10.945 09:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:10.945 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:10.945 ... 
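The job file handed to fio on /dev/fd/61 is generated by dif.sh's gen_fio_conf and is not printed in the log; based on the parameters traced for this fio_dif_rand_params case (bs=128k, numjobs=3, iodepth=3, runtime=5, random reads against the namespace bdev of the Nvme0 controller, conventionally named Nvme0n1), an approximate reconstruction would be:

cat > job.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF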
00:20:10.945 fio-3.35 00:20:10.945 Starting 3 threads 00:20:16.218 00:20:16.218 filename0: (groupid=0, jobs=1): err= 0: pid=83663: Mon Jul 15 09:46:10 2024 00:20:16.218 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(164MiB/5007msec) 00:20:16.218 slat (nsec): min=7146, max=46115, avg=15715.98, stdev=5092.60 00:20:16.218 clat (usec): min=11280, max=15860, avg=11419.97, stdev=222.21 00:20:16.218 lat (usec): min=11292, max=15890, avg=11435.69, stdev=222.70 00:20:16.218 clat percentiles (usec): 00:20:16.218 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:20:16.218 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11338], 60.00th=[11469], 00:20:16.218 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11469], 95.00th=[11469], 00:20:16.218 | 99.00th=[11600], 99.50th=[11994], 99.90th=[15795], 99.95th=[15926], 00:20:16.218 | 99.99th=[15926] 00:20:16.218 bw ( KiB/s): min=33024, max=33792, per=33.30%, avg=33484.80, stdev=396.59, samples=10 00:20:16.218 iops : min= 258, max= 264, avg=261.60, stdev= 3.10, samples=10 00:20:16.218 lat (msec) : 20=100.00% 00:20:16.218 cpu : usr=91.55%, sys=7.87%, ctx=69, majf=0, minf=9 00:20:16.218 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.218 issued rwts: total=1311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.218 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:16.218 filename0: (groupid=0, jobs=1): err= 0: pid=83664: Mon Jul 15 09:46:10 2024 00:20:16.218 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(164MiB/5004msec) 00:20:16.218 slat (nsec): min=7244, max=41842, avg=16503.30, stdev=4263.75 00:20:16.218 clat (usec): min=11306, max=13801, avg=11413.64, stdev=127.83 00:20:16.218 lat (usec): min=11320, max=13820, avg=11430.14, stdev=128.10 00:20:16.218 clat percentiles (usec): 00:20:16.218 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:20:16.218 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11338], 60.00th=[11469], 00:20:16.218 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11469], 95.00th=[11469], 00:20:16.218 | 99.00th=[11600], 99.50th=[11994], 99.90th=[13829], 99.95th=[13829], 00:20:16.218 | 99.99th=[13829] 00:20:16.218 bw ( KiB/s): min=33024, max=33792, per=33.30%, avg=33484.80, stdev=396.59, samples=10 00:20:16.218 iops : min= 258, max= 264, avg=261.60, stdev= 3.10, samples=10 00:20:16.218 lat (msec) : 20=100.00% 00:20:16.218 cpu : usr=91.29%, sys=8.22%, ctx=7, majf=0, minf=9 00:20:16.218 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.218 issued rwts: total=1311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.218 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:16.218 filename0: (groupid=0, jobs=1): err= 0: pid=83665: Mon Jul 15 09:46:10 2024 00:20:16.218 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(164MiB/5005msec) 00:20:16.218 slat (nsec): min=7240, max=42004, avg=16399.01, stdev=4524.32 00:20:16.218 clat (usec): min=11274, max=14072, avg=11414.85, stdev=140.62 00:20:16.218 lat (usec): min=11288, max=14092, avg=11431.25, stdev=140.92 00:20:16.218 clat percentiles (usec): 00:20:16.218 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:20:16.218 | 30.00th=[11338], 40.00th=[11338], 
50.00th=[11338], 60.00th=[11469], 00:20:16.218 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11469], 95.00th=[11469], 00:20:16.218 | 99.00th=[11600], 99.50th=[11994], 99.90th=[14091], 99.95th=[14091], 00:20:16.218 | 99.99th=[14091] 00:20:16.218 bw ( KiB/s): min=33024, max=33792, per=33.31%, avg=33491.40, stdev=388.54, samples=10 00:20:16.218 iops : min= 258, max= 264, avg=261.60, stdev= 3.10, samples=10 00:20:16.218 lat (msec) : 20=100.00% 00:20:16.218 cpu : usr=90.99%, sys=8.49%, ctx=5, majf=0, minf=9 00:20:16.218 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:16.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.219 issued rwts: total=1311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.219 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:16.219 00:20:16.219 Run status group 0 (all jobs): 00:20:16.219 READ: bw=98.2MiB/s (103MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=492MiB (516MB), run=5004-5007msec 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:16.486 09:46:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 bdev_null0 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 [2024-07-15 09:46:10.883803] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 bdev_null1 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 bdev_null2 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.486 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.744 { 00:20:16.744 "params": { 00:20:16.744 "name": "Nvme$subsystem", 00:20:16.744 "trtype": "$TEST_TRANSPORT", 00:20:16.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.744 "adrfam": "ipv4", 00:20:16.744 "trsvcid": "$NVMF_PORT", 00:20:16.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.744 "hdgst": ${hdgst:-false}, 00:20:16.744 "ddgst": ${ddgst:-false} 00:20:16.744 }, 00:20:16.744 "method": "bdev_nvme_attach_controller" 00:20:16.744 } 00:20:16.744 EOF 00:20:16.744 )") 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.744 { 00:20:16.744 "params": { 00:20:16.744 "name": "Nvme$subsystem", 00:20:16.744 "trtype": "$TEST_TRANSPORT", 00:20:16.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.744 "adrfam": "ipv4", 00:20:16.744 "trsvcid": "$NVMF_PORT", 00:20:16.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.744 "hdgst": ${hdgst:-false}, 00:20:16.744 "ddgst": ${ddgst:-false} 00:20:16.744 }, 00:20:16.744 "method": 
"bdev_nvme_attach_controller" 00:20:16.744 } 00:20:16.744 EOF 00:20:16.744 )") 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:16.744 { 00:20:16.744 "params": { 00:20:16.744 "name": "Nvme$subsystem", 00:20:16.744 "trtype": "$TEST_TRANSPORT", 00:20:16.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.744 "adrfam": "ipv4", 00:20:16.744 "trsvcid": "$NVMF_PORT", 00:20:16.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.744 "hdgst": ${hdgst:-false}, 00:20:16.744 "ddgst": ${ddgst:-false} 00:20:16.744 }, 00:20:16.744 "method": "bdev_nvme_attach_controller" 00:20:16.744 } 00:20:16.744 EOF 00:20:16.744 )") 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:16.744 "params": { 00:20:16.744 "name": "Nvme0", 00:20:16.744 "trtype": "tcp", 00:20:16.744 "traddr": "10.0.0.2", 00:20:16.744 "adrfam": "ipv4", 00:20:16.744 "trsvcid": "4420", 00:20:16.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:16.744 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:16.744 "hdgst": false, 00:20:16.744 "ddgst": false 00:20:16.744 }, 00:20:16.744 "method": "bdev_nvme_attach_controller" 00:20:16.744 },{ 00:20:16.744 "params": { 00:20:16.744 "name": "Nvme1", 00:20:16.744 "trtype": "tcp", 00:20:16.744 "traddr": "10.0.0.2", 00:20:16.744 "adrfam": "ipv4", 00:20:16.744 "trsvcid": "4420", 00:20:16.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.744 "hdgst": false, 00:20:16.744 "ddgst": false 00:20:16.744 }, 00:20:16.744 "method": "bdev_nvme_attach_controller" 00:20:16.744 },{ 00:20:16.744 "params": { 00:20:16.744 "name": "Nvme2", 00:20:16.744 "trtype": "tcp", 00:20:16.744 "traddr": "10.0.0.2", 00:20:16.744 "adrfam": "ipv4", 00:20:16.744 "trsvcid": "4420", 00:20:16.744 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:16.744 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:16.744 "hdgst": false, 00:20:16.744 "ddgst": false 00:20:16.744 }, 00:20:16.744 "method": "bdev_nvme_attach_controller" 00:20:16.744 }' 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:16.744 09:46:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:16.744 09:46:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:16.744 09:46:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:16.744 09:46:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:16.744 09:46:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:16.744 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:16.744 ... 00:20:16.744 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:16.744 ... 00:20:16.744 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:16.744 ... 00:20:16.744 fio-3.35 00:20:16.744 Starting 24 threads 00:20:28.937 00:20:28.937 filename0: (groupid=0, jobs=1): err= 0: pid=83761: Mon Jul 15 09:46:21 2024 00:20:28.937 read: IOPS=218, BW=874KiB/s (895kB/s)(8780KiB/10049msec) 00:20:28.937 slat (usec): min=7, max=8029, avg=29.16, stdev=314.90 00:20:28.937 clat (msec): min=4, max=167, avg=72.99, stdev=25.74 00:20:28.937 lat (msec): min=4, max=167, avg=73.02, stdev=25.74 00:20:28.937 clat percentiles (msec): 00:20:28.937 | 1.00th=[ 6], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 53], 00:20:28.937 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:20:28.937 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 117], 00:20:28.937 | 99.00th=[ 133], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150], 00:20:28.937 | 99.99th=[ 167] 00:20:28.937 bw ( KiB/s): min= 590, max= 1651, per=4.34%, avg=874.45, stdev=218.80, samples=20 00:20:28.937 iops : min= 147, max= 412, avg=218.55, stdev=54.59, samples=20 00:20:28.937 lat (msec) : 10=2.92%, 20=0.73%, 50=13.58%, 100=65.42%, 250=17.36% 00:20:28.937 cpu : usr=43.34%, sys=2.46%, ctx=1356, majf=0, minf=9 00:20:28.937 IO depths : 1=0.1%, 2=1.2%, 4=4.3%, 8=78.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 issued rwts: total=2195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.937 filename0: (groupid=0, jobs=1): err= 0: pid=83762: Mon Jul 15 09:46:21 2024 00:20:28.937 read: IOPS=203, BW=814KiB/s (834kB/s)(8176KiB/10039msec) 00:20:28.937 slat (usec): min=5, max=4025, avg=20.88, stdev=125.61 00:20:28.937 clat (msec): min=15, max=139, avg=78.37, stdev=20.76 00:20:28.937 lat (msec): min=15, max=139, avg=78.39, stdev=20.76 00:20:28.937 clat percentiles (msec): 00:20:28.937 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:20:28.937 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:20:28.937 | 70.00th=[ 88], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 115], 00:20:28.937 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 140], 99.95th=[ 140], 00:20:28.937 | 99.99th=[ 140] 00:20:28.937 bw ( KiB/s): min= 528, max= 1136, per=4.02%, avg=811.10, stdev=132.56, samples=20 00:20:28.937 iops : min= 132, max= 284, avg=202.75, stdev=33.17, samples=20 00:20:28.937 lat (msec) : 20=0.78%, 50=7.63%, 100=73.83%, 250=17.76% 00:20:28.937 cpu : usr=40.96%, sys=1.99%, ctx=1211, majf=0, minf=9 00:20:28.937 IO depths : 1=0.1%, 2=2.3%, 4=9.1%, 8=73.4%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 complete : 0=0.0%, 4=89.9%, 8=8.1%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:20:28.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.937 filename0: (groupid=0, jobs=1): err= 0: pid=83763: Mon Jul 15 09:46:21 2024 00:20:28.937 read: IOPS=204, BW=818KiB/s (838kB/s)(8200KiB/10022msec) 00:20:28.937 slat (usec): min=4, max=8033, avg=24.04, stdev=250.38 00:20:28.937 clat (msec): min=28, max=144, avg=77.99, stdev=20.66 00:20:28.937 lat (msec): min=28, max=144, avg=78.01, stdev=20.66 00:20:28.937 clat percentiles (msec): 00:20:28.937 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:20:28.937 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:20:28.937 | 70.00th=[ 87], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 112], 00:20:28.937 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:20:28.937 | 99.99th=[ 144] 00:20:28.937 bw ( KiB/s): min= 640, max= 976, per=4.03%, avg=813.65, stdev=99.47, samples=20 00:20:28.937 iops : min= 160, max= 244, avg=203.40, stdev=24.87, samples=20 00:20:28.937 lat (msec) : 50=10.68%, 100=71.61%, 250=17.71% 00:20:28.937 cpu : usr=36.17%, sys=1.81%, ctx=1080, majf=0, minf=9 00:20:28.937 IO depths : 1=0.1%, 2=2.2%, 4=9.1%, 8=73.9%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 complete : 0=0.0%, 4=89.5%, 8=8.5%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 issued rwts: total=2050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.937 filename0: (groupid=0, jobs=1): err= 0: pid=83764: Mon Jul 15 09:46:21 2024 00:20:28.937 read: IOPS=217, BW=870KiB/s (890kB/s)(8716KiB/10023msec) 00:20:28.937 slat (usec): min=3, max=5031, avg=19.62, stdev=116.01 00:20:28.937 clat (msec): min=21, max=148, avg=73.49, stdev=21.56 00:20:28.937 lat (msec): min=21, max=148, avg=73.50, stdev=21.56 00:20:28.937 clat percentiles (msec): 00:20:28.937 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 54], 00:20:28.937 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:20:28.937 | 70.00th=[ 81], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 114], 00:20:28.937 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 142], 99.95th=[ 142], 00:20:28.937 | 99.99th=[ 148] 00:20:28.937 bw ( KiB/s): min= 656, max= 1072, per=4.29%, avg=865.20, stdev=126.07, samples=20 00:20:28.937 iops : min= 164, max= 268, avg=216.30, stdev=31.52, samples=20 00:20:28.937 lat (msec) : 50=15.92%, 100=70.12%, 250=13.95% 00:20:28.937 cpu : usr=41.63%, sys=2.18%, ctx=1325, majf=0, minf=9 00:20:28.937 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 issued rwts: total=2179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.937 filename0: (groupid=0, jobs=1): err= 0: pid=83765: Mon Jul 15 09:46:21 2024 00:20:28.937 read: IOPS=216, BW=867KiB/s (888kB/s)(8712KiB/10051msec) 00:20:28.937 slat (usec): min=6, max=4051, avg=19.90, stdev=149.20 00:20:28.937 clat (msec): min=4, max=156, avg=73.63, stdev=24.99 00:20:28.937 lat (msec): min=4, max=156, avg=73.65, stdev=24.99 00:20:28.937 clat percentiles (msec): 00:20:28.937 | 1.00th=[ 6], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 56], 00:20:28.937 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:20:28.937 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 115], 
00:20:28.937 | 99.00th=[ 128], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 153], 00:20:28.937 | 99.99th=[ 157] 00:20:28.937 bw ( KiB/s): min= 608, max= 1576, per=4.30%, avg=867.25, stdev=202.72, samples=20 00:20:28.937 iops : min= 152, max= 394, avg=216.80, stdev=50.69, samples=20 00:20:28.937 lat (msec) : 10=2.85%, 20=1.01%, 50=11.66%, 100=67.08%, 250=17.40% 00:20:28.937 cpu : usr=36.50%, sys=1.66%, ctx=1074, majf=0, minf=0 00:20:28.937 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 issued rwts: total=2178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.937 filename0: (groupid=0, jobs=1): err= 0: pid=83766: Mon Jul 15 09:46:21 2024 00:20:28.937 read: IOPS=214, BW=859KiB/s (880kB/s)(8624KiB/10040msec) 00:20:28.937 slat (usec): min=7, max=8026, avg=31.23, stdev=325.51 00:20:28.937 clat (msec): min=20, max=142, avg=74.33, stdev=21.62 00:20:28.937 lat (msec): min=20, max=142, avg=74.36, stdev=21.61 00:20:28.937 clat percentiles (msec): 00:20:28.937 | 1.00th=[ 26], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:20:28.937 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:20:28.937 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 115], 00:20:28.937 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 142], 99.95th=[ 142], 00:20:28.937 | 99.99th=[ 144] 00:20:28.937 bw ( KiB/s): min= 616, max= 1109, per=4.24%, avg=855.75, stdev=131.99, samples=20 00:20:28.937 iops : min= 154, max= 277, avg=213.90, stdev=33.00, samples=20 00:20:28.937 lat (msec) : 50=16.00%, 100=69.06%, 250=14.94% 00:20:28.937 cpu : usr=33.72%, sys=1.91%, ctx=1110, majf=0, minf=9 00:20:28.937 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.5%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.937 filename0: (groupid=0, jobs=1): err= 0: pid=83767: Mon Jul 15 09:46:21 2024 00:20:28.937 read: IOPS=201, BW=807KiB/s (826kB/s)(8100KiB/10039msec) 00:20:28.937 slat (usec): min=7, max=8025, avg=27.39, stdev=278.64 00:20:28.937 clat (msec): min=20, max=146, avg=79.12, stdev=22.00 00:20:28.937 lat (msec): min=20, max=146, avg=79.14, stdev=22.01 00:20:28.937 clat percentiles (msec): 00:20:28.937 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:20:28.937 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 81], 00:20:28.937 | 70.00th=[ 88], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 117], 00:20:28.937 | 99.00th=[ 138], 99.50th=[ 138], 99.90th=[ 146], 99.95th=[ 146], 00:20:28.937 | 99.99th=[ 146] 00:20:28.937 bw ( KiB/s): min= 528, max= 1021, per=3.99%, avg=803.35, stdev=131.24, samples=20 00:20:28.937 iops : min= 132, max= 255, avg=200.80, stdev=32.82, samples=20 00:20:28.937 lat (msec) : 50=9.23%, 100=69.98%, 250=20.79% 00:20:28.937 cpu : usr=38.30%, sys=1.80%, ctx=1226, majf=0, minf=9 00:20:28.937 IO depths : 1=0.1%, 2=2.8%, 4=11.2%, 8=71.3%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 complete : 0=0.0%, 4=90.4%, 8=7.2%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:20:28.937 issued rwts: total=2025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.937 filename0: (groupid=0, jobs=1): err= 0: pid=83768: Mon Jul 15 09:46:21 2024 00:20:28.937 read: IOPS=209, BW=839KiB/s (860kB/s)(8400KiB/10006msec) 00:20:28.937 slat (usec): min=4, max=8031, avg=22.03, stdev=195.70 00:20:28.937 clat (msec): min=28, max=149, avg=76.12, stdev=20.82 00:20:28.937 lat (msec): min=28, max=149, avg=76.14, stdev=20.82 00:20:28.937 clat percentiles (msec): 00:20:28.937 | 1.00th=[ 38], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 59], 00:20:28.937 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 79], 00:20:28.937 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 116], 00:20:28.937 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 150], 00:20:28.937 | 99.99th=[ 150] 00:20:28.937 bw ( KiB/s): min= 608, max= 1048, per=4.12%, avg=831.53, stdev=123.13, samples=19 00:20:28.937 iops : min= 152, max= 262, avg=207.84, stdev=30.81, samples=19 00:20:28.937 lat (msec) : 50=10.81%, 100=72.48%, 250=16.71% 00:20:28.937 cpu : usr=39.13%, sys=1.93%, ctx=1537, majf=0, minf=9 00:20:28.937 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.2%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 complete : 0=0.0%, 4=88.7%, 8=10.0%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 issued rwts: total=2100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.937 filename1: (groupid=0, jobs=1): err= 0: pid=83769: Mon Jul 15 09:46:21 2024 00:20:28.937 read: IOPS=226, BW=907KiB/s (929kB/s)(9072KiB/10002msec) 00:20:28.937 slat (usec): min=3, max=8058, avg=21.18, stdev=183.57 00:20:28.937 clat (usec): min=997, max=180162, avg=70474.06, stdev=26111.69 00:20:28.937 lat (usec): min=1005, max=180173, avg=70495.24, stdev=26111.28 00:20:28.937 clat percentiles (msec): 00:20:28.937 | 1.00th=[ 3], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 50], 00:20:28.937 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 74], 00:20:28.937 | 70.00th=[ 79], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 117], 00:20:28.937 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 180], 00:20:28.937 | 99.99th=[ 180] 00:20:28.937 bw ( KiB/s): min= 672, max= 1128, per=4.32%, avg=870.21, stdev=145.10, samples=19 00:20:28.937 iops : min= 168, max= 282, avg=217.53, stdev=36.30, samples=19 00:20:28.937 lat (usec) : 1000=0.04% 00:20:28.937 lat (msec) : 2=0.35%, 4=1.41%, 10=1.28%, 20=0.26%, 50=17.99% 00:20:28.937 lat (msec) : 100=64.07%, 250=14.59% 00:20:28.937 cpu : usr=41.45%, sys=2.21%, ctx=1378, majf=0, minf=9 00:20:28.937 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=82.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 complete : 0=0.0%, 4=87.2%, 8=12.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.937 filename1: (groupid=0, jobs=1): err= 0: pid=83770: Mon Jul 15 09:46:21 2024 00:20:28.937 read: IOPS=196, BW=784KiB/s (803kB/s)(7880KiB/10050msec) 00:20:28.937 slat (usec): min=7, max=4046, avg=26.13, stdev=181.02 00:20:28.937 clat (msec): min=3, max=156, avg=81.38, stdev=25.47 00:20:28.937 lat (msec): min=3, max=156, avg=81.40, stdev=25.47 00:20:28.937 clat percentiles (msec): 00:20:28.937 | 1.00th=[ 9], 
5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 64], 00:20:28.937 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 85], 00:20:28.937 | 70.00th=[ 92], 80.00th=[ 106], 90.00th=[ 113], 95.00th=[ 121], 00:20:28.937 | 99.00th=[ 148], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 157], 00:20:28.937 | 99.99th=[ 157] 00:20:28.937 bw ( KiB/s): min= 512, max= 1504, per=3.88%, avg=781.50, stdev=203.34, samples=20 00:20:28.937 iops : min= 128, max= 376, avg=195.35, stdev=50.85, samples=20 00:20:28.937 lat (msec) : 4=0.71%, 10=1.52%, 20=0.91%, 50=3.55%, 100=69.14% 00:20:28.937 lat (msec) : 250=24.16% 00:20:28.937 cpu : usr=41.32%, sys=2.26%, ctx=1306, majf=0, minf=9 00:20:28.937 IO depths : 1=0.1%, 2=4.3%, 4=16.8%, 8=64.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:20:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 complete : 0=0.0%, 4=92.2%, 8=4.1%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.937 issued rwts: total=1970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.937 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.937 filename1: (groupid=0, jobs=1): err= 0: pid=83771: Mon Jul 15 09:46:21 2024 00:20:28.937 read: IOPS=217, BW=869KiB/s (890kB/s)(8696KiB/10006msec) 00:20:28.937 slat (usec): min=4, max=8029, avg=30.22, stdev=279.16 00:20:28.937 clat (msec): min=8, max=136, avg=73.47, stdev=21.73 00:20:28.937 lat (msec): min=8, max=136, avg=73.50, stdev=21.72 00:20:28.937 clat percentiles (msec): 00:20:28.937 | 1.00th=[ 33], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:20:28.937 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:20:28.937 | 70.00th=[ 81], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 112], 00:20:28.937 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 138], 99.95th=[ 138], 00:20:28.938 | 99.99th=[ 138] 00:20:28.938 bw ( KiB/s): min= 664, max= 1128, per=4.25%, avg=857.74, stdev=139.46, samples=19 00:20:28.938 iops : min= 166, max= 282, avg=214.42, stdev=34.87, samples=19 00:20:28.938 lat (msec) : 10=0.32%, 20=0.28%, 50=16.10%, 100=68.03%, 250=15.27% 00:20:28.938 cpu : usr=40.46%, sys=2.21%, ctx=1270, majf=0, minf=9 00:20:28.938 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=2174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename1: (groupid=0, jobs=1): err= 0: pid=83772: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=199, BW=799KiB/s (819kB/s)(7996KiB/10003msec) 00:20:28.938 slat (usec): min=4, max=8035, avg=24.44, stdev=253.58 00:20:28.938 clat (msec): min=4, max=155, avg=79.90, stdev=24.44 00:20:28.938 lat (msec): min=4, max=155, avg=79.92, stdev=24.44 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 24], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 00:20:28.938 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:20:28.938 | 70.00th=[ 94], 80.00th=[ 106], 90.00th=[ 115], 95.00th=[ 121], 00:20:28.938 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:20:28.938 | 99.99th=[ 155] 00:20:28.938 bw ( KiB/s): min= 512, max= 1000, per=3.89%, avg=783.11, stdev=125.40, samples=19 00:20:28.938 iops : min= 128, max= 250, avg=195.74, stdev=31.32, samples=19 00:20:28.938 lat (msec) : 10=0.80%, 50=10.61%, 100=68.08%, 250=20.51% 00:20:28.938 cpu : usr=31.26%, sys=1.83%, ctx=871, majf=0, minf=9 
00:20:28.938 IO depths : 1=0.1%, 2=3.2%, 4=12.8%, 8=69.9%, 16=14.1%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=90.6%, 8=6.6%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=1999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename1: (groupid=0, jobs=1): err= 0: pid=83773: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=205, BW=820KiB/s (840kB/s)(8228KiB/10029msec) 00:20:28.938 slat (usec): min=6, max=8053, avg=27.91, stdev=306.48 00:20:28.938 clat (msec): min=37, max=147, avg=77.82, stdev=20.21 00:20:28.938 lat (msec): min=37, max=147, avg=77.85, stdev=20.22 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:20:28.938 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:20:28.938 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:20:28.938 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 148], 00:20:28.938 | 99.99th=[ 148] 00:20:28.938 bw ( KiB/s): min= 640, max= 1024, per=4.05%, avg=816.40, stdev=101.14, samples=20 00:20:28.938 iops : min= 160, max= 256, avg=204.10, stdev=25.29, samples=20 00:20:28.938 lat (msec) : 50=10.65%, 100=72.00%, 250=17.36% 00:20:28.938 cpu : usr=32.13%, sys=1.83%, ctx=882, majf=0, minf=9 00:20:28.938 IO depths : 1=0.1%, 2=2.3%, 4=9.2%, 8=73.6%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=89.8%, 8=8.2%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=2057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename1: (groupid=0, jobs=1): err= 0: pid=83774: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=219, BW=880KiB/s (901kB/s)(8808KiB/10013msec) 00:20:28.938 slat (usec): min=4, max=8027, avg=19.09, stdev=170.87 00:20:28.938 clat (msec): min=22, max=144, avg=72.64, stdev=22.04 00:20:28.938 lat (msec): min=22, max=144, avg=72.66, stdev=22.04 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:20:28.938 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:20:28.938 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 110], 00:20:28.938 | 99.00th=[ 122], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:20:28.938 | 99.99th=[ 144] 00:20:28.938 bw ( KiB/s): min= 712, max= 1072, per=4.35%, avg=876.55, stdev=120.25, samples=20 00:20:28.938 iops : min= 178, max= 268, avg=219.10, stdev=30.08, samples=20 00:20:28.938 lat (msec) : 50=20.75%, 100=65.58%, 250=13.67% 00:20:28.938 cpu : usr=32.17%, sys=1.55%, ctx=870, majf=0, minf=9 00:20:28.938 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=2202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename1: (groupid=0, jobs=1): err= 0: pid=83775: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=200, BW=804KiB/s (823kB/s)(8040KiB/10002msec) 00:20:28.938 slat (usec): min=3, max=8034, avg=37.17, stdev=409.14 00:20:28.938 clat (msec): min=2, max=178, avg=79.46, stdev=24.60 
00:20:28.938 lat (msec): min=2, max=178, avg=79.50, stdev=24.61 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 6], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 00:20:28.938 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:20:28.938 | 70.00th=[ 93], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 120], 00:20:28.938 | 99.00th=[ 144], 99.50th=[ 165], 99.90th=[ 165], 99.95th=[ 180], 00:20:28.938 | 99.99th=[ 180] 00:20:28.938 bw ( KiB/s): min= 528, max= 1024, per=3.86%, avg=777.74, stdev=121.55, samples=19 00:20:28.938 iops : min= 132, max= 256, avg=194.42, stdev=30.37, samples=19 00:20:28.938 lat (msec) : 4=0.30%, 10=1.44%, 20=0.15%, 50=9.65%, 100=68.11% 00:20:28.938 lat (msec) : 250=20.35% 00:20:28.938 cpu : usr=34.41%, sys=2.02%, ctx=1070, majf=0, minf=9 00:20:28.938 IO depths : 1=0.1%, 2=3.2%, 4=13.0%, 8=69.6%, 16=14.1%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=90.7%, 8=6.5%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename1: (groupid=0, jobs=1): err= 0: pid=83776: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=212, BW=850KiB/s (871kB/s)(8540KiB/10043msec) 00:20:28.938 slat (usec): min=6, max=8025, avg=23.73, stdev=214.43 00:20:28.938 clat (msec): min=16, max=151, avg=75.05, stdev=22.65 00:20:28.938 lat (msec): min=16, max=151, avg=75.08, stdev=22.65 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 30], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:20:28.938 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:20:28.938 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 116], 00:20:28.938 | 99.00th=[ 128], 99.50th=[ 136], 99.90th=[ 146], 99.95th=[ 146], 00:20:28.938 | 99.99th=[ 153] 00:20:28.938 bw ( KiB/s): min= 616, max= 1133, per=4.20%, avg=847.35, stdev=147.66, samples=20 00:20:28.938 iops : min= 154, max= 283, avg=211.80, stdev=36.90, samples=20 00:20:28.938 lat (msec) : 20=0.75%, 50=13.96%, 100=67.82%, 250=17.47% 00:20:28.938 cpu : usr=38.72%, sys=1.99%, ctx=1294, majf=0, minf=9 00:20:28.938 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=2135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename2: (groupid=0, jobs=1): err= 0: pid=83777: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=216, BW=864KiB/s (885kB/s)(8656KiB/10015msec) 00:20:28.938 slat (usec): min=4, max=4023, avg=20.94, stdev=122.00 00:20:28.938 clat (msec): min=14, max=152, avg=73.93, stdev=21.72 00:20:28.938 lat (msec): min=14, max=153, avg=73.95, stdev=21.72 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 54], 00:20:28.938 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:20:28.938 | 70.00th=[ 81], 80.00th=[ 92], 90.00th=[ 107], 95.00th=[ 114], 00:20:28.938 | 99.00th=[ 123], 99.50th=[ 131], 99.90th=[ 142], 99.95th=[ 153], 00:20:28.938 | 99.99th=[ 153] 00:20:28.938 bw ( KiB/s): min= 712, max= 1048, per=4.27%, avg=861.80, stdev=116.81, samples=20 00:20:28.938 iops : min= 178, max= 262, avg=215.40, stdev=29.25, samples=20 00:20:28.938 lat 
(msec) : 20=0.28%, 50=16.54%, 100=67.47%, 250=15.71% 00:20:28.938 cpu : usr=40.19%, sys=1.98%, ctx=1210, majf=0, minf=9 00:20:28.938 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename2: (groupid=0, jobs=1): err= 0: pid=83778: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=207, BW=830KiB/s (850kB/s)(8316KiB/10020msec) 00:20:28.938 slat (usec): min=4, max=8049, avg=23.80, stdev=249.04 00:20:28.938 clat (msec): min=23, max=152, avg=76.91, stdev=20.77 00:20:28.938 lat (msec): min=23, max=152, avg=76.93, stdev=20.77 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:20:28.938 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:20:28.938 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 115], 00:20:28.938 | 99.00th=[ 124], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 153], 00:20:28.938 | 99.99th=[ 153] 00:20:28.938 bw ( KiB/s): min= 640, max= 1024, per=4.10%, avg=827.25, stdev=100.77, samples=20 00:20:28.938 iops : min= 160, max= 256, avg=206.75, stdev=25.23, samples=20 00:20:28.938 lat (msec) : 50=12.75%, 100=70.47%, 250=16.79% 00:20:28.938 cpu : usr=31.40%, sys=1.68%, ctx=885, majf=0, minf=9 00:20:28.938 IO depths : 1=0.1%, 2=2.2%, 4=8.8%, 8=74.3%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=89.4%, 8=8.7%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=2079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename2: (groupid=0, jobs=1): err= 0: pid=83779: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=209, BW=837KiB/s (857kB/s)(8392KiB/10026msec) 00:20:28.938 slat (usec): min=5, max=8026, avg=19.55, stdev=175.01 00:20:28.938 clat (msec): min=35, max=168, avg=76.30, stdev=21.13 00:20:28.938 lat (msec): min=35, max=168, avg=76.32, stdev=21.13 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:20:28.938 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:20:28.938 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:20:28.938 | 99.00th=[ 121], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 144], 00:20:28.938 | 99.99th=[ 169] 00:20:28.938 bw ( KiB/s): min= 616, max= 1016, per=4.14%, avg=835.30, stdev=120.82, samples=20 00:20:28.938 iops : min= 154, max= 254, avg=208.80, stdev=30.17, samples=20 00:20:28.938 lat (msec) : 50=14.01%, 100=69.78%, 250=16.21% 00:20:28.938 cpu : usr=32.86%, sys=1.93%, ctx=908, majf=0, minf=9 00:20:28.938 IO depths : 1=0.1%, 2=1.2%, 4=4.9%, 8=78.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=88.7%, 8=10.2%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=2098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename2: (groupid=0, jobs=1): err= 0: pid=83780: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=197, BW=789KiB/s (808kB/s)(7908KiB/10028msec) 00:20:28.938 
slat (usec): min=3, max=8030, avg=23.18, stdev=254.95 00:20:28.938 clat (msec): min=40, max=168, avg=80.97, stdev=21.96 00:20:28.938 lat (msec): min=40, max=168, avg=80.99, stdev=21.97 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 56], 20.00th=[ 61], 00:20:28.938 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:20:28.938 | 70.00th=[ 88], 80.00th=[ 106], 90.00th=[ 111], 95.00th=[ 121], 00:20:28.938 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 169], 00:20:28.938 | 99.99th=[ 169] 00:20:28.938 bw ( KiB/s): min= 624, max= 1005, per=3.90%, avg=786.65, stdev=105.59, samples=20 00:20:28.938 iops : min= 156, max= 251, avg=196.65, stdev=26.37, samples=20 00:20:28.938 lat (msec) : 50=7.44%, 100=71.07%, 250=21.50% 00:20:28.938 cpu : usr=32.57%, sys=1.67%, ctx=914, majf=0, minf=9 00:20:28.938 IO depths : 1=0.1%, 2=2.7%, 4=11.2%, 8=71.0%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=90.6%, 8=6.9%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=1977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename2: (groupid=0, jobs=1): err= 0: pid=83781: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=215, BW=861KiB/s (881kB/s)(8632KiB/10029msec) 00:20:28.938 slat (usec): min=4, max=4028, avg=21.22, stdev=149.58 00:20:28.938 clat (msec): min=23, max=143, avg=74.24, stdev=21.42 00:20:28.938 lat (msec): min=23, max=143, avg=74.26, stdev=21.42 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:20:28.938 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:20:28.938 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 115], 00:20:28.938 | 99.00th=[ 122], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:20:28.938 | 99.99th=[ 144] 00:20:28.938 bw ( KiB/s): min= 616, max= 1056, per=4.25%, avg=856.80, stdev=128.29, samples=20 00:20:28.938 iops : min= 154, max= 264, avg=214.20, stdev=32.07, samples=20 00:20:28.938 lat (msec) : 50=15.15%, 100=69.69%, 250=15.15% 00:20:28.938 cpu : usr=34.61%, sys=1.87%, ctx=1058, majf=0, minf=9 00:20:28.938 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=2158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename2: (groupid=0, jobs=1): err= 0: pid=83782: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=215, BW=862KiB/s (883kB/s)(8656KiB/10037msec) 00:20:28.938 slat (usec): min=7, max=4026, avg=21.53, stdev=149.41 00:20:28.938 clat (msec): min=22, max=138, avg=74.04, stdev=21.27 00:20:28.938 lat (msec): min=22, max=138, avg=74.06, stdev=21.27 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 27], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:20:28.938 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:20:28.938 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 114], 00:20:28.938 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 140], 99.95th=[ 140], 00:20:28.938 | 99.99th=[ 140] 00:20:28.938 bw ( KiB/s): min= 672, max= 1096, per=4.26%, avg=859.10, stdev=116.52, samples=20 00:20:28.938 iops : min= 168, max= 274, 
avg=214.75, stdev=29.12, samples=20 00:20:28.938 lat (msec) : 50=14.23%, 100=70.89%, 250=14.88% 00:20:28.938 cpu : usr=43.06%, sys=2.04%, ctx=1298, majf=0, minf=9 00:20:28.938 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename2: (groupid=0, jobs=1): err= 0: pid=83783: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=211, BW=846KiB/s (866kB/s)(8468KiB/10010msec) 00:20:28.938 slat (usec): min=4, max=8035, avg=19.96, stdev=174.45 00:20:28.938 clat (msec): min=10, max=171, avg=75.52, stdev=22.67 00:20:28.938 lat (msec): min=10, max=171, avg=75.54, stdev=22.67 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 55], 00:20:28.938 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:20:28.938 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 115], 00:20:28.938 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 171], 00:20:28.938 | 99.99th=[ 171] 00:20:28.938 bw ( KiB/s): min= 624, max= 1048, per=4.15%, avg=836.11, stdev=113.23, samples=19 00:20:28.938 iops : min= 156, max= 262, avg=209.00, stdev=28.33, samples=19 00:20:28.938 lat (msec) : 20=0.43%, 50=13.75%, 100=68.45%, 250=17.38% 00:20:28.938 cpu : usr=38.48%, sys=2.08%, ctx=1282, majf=0, minf=9 00:20:28.938 IO depths : 1=0.1%, 2=1.9%, 4=7.7%, 8=75.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 complete : 0=0.0%, 4=88.9%, 8=9.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.938 issued rwts: total=2117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.938 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.938 filename2: (groupid=0, jobs=1): err= 0: pid=83784: Mon Jul 15 09:46:21 2024 00:20:28.938 read: IOPS=212, BW=850KiB/s (871kB/s)(8524KiB/10027msec) 00:20:28.938 slat (usec): min=4, max=8044, avg=25.41, stdev=260.68 00:20:28.938 clat (msec): min=26, max=156, avg=75.10, stdev=21.99 00:20:28.938 lat (msec): min=26, max=156, avg=75.12, stdev=21.98 00:20:28.938 clat percentiles (msec): 00:20:28.938 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:20:28.938 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:20:28.938 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 117], 00:20:28.938 | 99.00th=[ 132], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:20:28.938 | 99.99th=[ 157] 00:20:28.938 bw ( KiB/s): min= 640, max= 1024, per=4.21%, avg=848.80, stdev=121.13, samples=20 00:20:28.938 iops : min= 160, max= 256, avg=212.20, stdev=30.28, samples=20 00:20:28.939 lat (msec) : 50=13.98%, 100=70.48%, 250=15.53% 00:20:28.939 cpu : usr=35.68%, sys=2.05%, ctx=1038, majf=0, minf=9 00:20:28.939 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=79.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:28.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.939 complete : 0=0.0%, 4=88.3%, 8=10.8%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.939 issued rwts: total=2131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.939 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:28.939 00:20:28.939 Run status group 0 (all jobs): 00:20:28.939 READ: bw=19.7MiB/s (20.6MB/s), 
784KiB/s-907KiB/s (803kB/s-929kB/s), io=198MiB (207MB), run=10002-10051msec 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 
09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 bdev_null0 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 [2024-07-15 09:46:22.266682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 bdev_null1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:28.939 { 00:20:28.939 "params": { 00:20:28.939 "name": "Nvme$subsystem", 00:20:28.939 "trtype": "$TEST_TRANSPORT", 00:20:28.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.939 "adrfam": "ipv4", 00:20:28.939 "trsvcid": "$NVMF_PORT", 00:20:28.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.939 "hdgst": ${hdgst:-false}, 00:20:28.939 "ddgst": ${ddgst:-false} 00:20:28.939 }, 00:20:28.939 "method": "bdev_nvme_attach_controller" 00:20:28.939 } 00:20:28.939 EOF 00:20:28.939 )") 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local 
file 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:28.939 { 00:20:28.939 "params": { 00:20:28.939 "name": "Nvme$subsystem", 00:20:28.939 "trtype": "$TEST_TRANSPORT", 00:20:28.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.939 "adrfam": "ipv4", 00:20:28.939 "trsvcid": "$NVMF_PORT", 00:20:28.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.939 "hdgst": ${hdgst:-false}, 00:20:28.939 "ddgst": ${ddgst:-false} 00:20:28.939 }, 00:20:28.939 "method": "bdev_nvme_attach_controller" 00:20:28.939 } 00:20:28.939 EOF 00:20:28.939 )") 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
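A note on the provisioning traced a few entries above: the create_subsystems helper reduces to four RPCs against the running nvmf_tgt. A minimal stand-alone sketch of the same sequence via SPDK's scripts/rpc.py follows; it is illustrative only and assumes the NVMe/TCP transport was already created earlier in the run (not shown in this excerpt).

# Hypothetical replay of the rpc_cmd calls in the trace above (not part of the log):
# 64 MB null bdev, 512 B blocks, 16 B metadata, DIF type 1, exported over NVMe/TCP.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The second subsystem (cnode1 backed by bdev_null1, serial 53313233-1) is created the same way in the trace.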
00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:28.939 "params": { 00:20:28.939 "name": "Nvme0", 00:20:28.939 "trtype": "tcp", 00:20:28.939 "traddr": "10.0.0.2", 00:20:28.939 "adrfam": "ipv4", 00:20:28.939 "trsvcid": "4420", 00:20:28.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:28.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:28.939 "hdgst": false, 00:20:28.939 "ddgst": false 00:20:28.939 }, 00:20:28.939 "method": "bdev_nvme_attach_controller" 00:20:28.939 },{ 00:20:28.939 "params": { 00:20:28.939 "name": "Nvme1", 00:20:28.939 "trtype": "tcp", 00:20:28.939 "traddr": "10.0.0.2", 00:20:28.939 "adrfam": "ipv4", 00:20:28.939 "trsvcid": "4420", 00:20:28.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.939 "hdgst": false, 00:20:28.939 "ddgst": false 00:20:28.939 }, 00:20:28.939 "method": "bdev_nvme_attach_controller" 00:20:28.939 }' 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:28.939 09:46:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:28.939 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:28.939 ... 00:20:28.939 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:28.939 ... 
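The command traced above preloads the SPDK bdev fio plugin and hands it a generated JSON config plus a generated job file through /dev/fd; the JSON printed in the trace is what gen_nvmf_target_json assembled. A rough hand-written equivalent is sketched below. The file paths, the bdev name Nvme0n1, and the job-file layout are assumptions inferred from the trace (the harness builds these on the fly), and the exact JSON wrapper the harness emits is not shown in this excerpt.

# Sketch only: static files in place of the /dev/fd plumbing used by the harness.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
cat > /tmp/rand_params.fio <<'EOF'
[global]
thread=1            # the SPDK fio plugin runs jobs in thread mode
[filename0]
rw=randread
bs=8k,16k,128k      # read,write,trim block sizes, as in the traced job
iodepth=8
numjobs=2
runtime=5
filename=Nvme0n1    # bdev exposed by the attach_controller entry above (assumed name)
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/rand_params.fio

A second bdev_nvme_attach_controller entry for Nvme1 against cnode1, plus a second job section, would correspond to the filename0/filename1 jobs listed above.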
00:20:28.939 fio-3.35 00:20:28.939 Starting 4 threads 00:20:34.201 00:20:34.201 filename0: (groupid=0, jobs=1): err= 0: pid=83923: Mon Jul 15 09:46:28 2024 00:20:34.201 read: IOPS=2112, BW=16.5MiB/s (17.3MB/s)(82.5MiB/5001msec) 00:20:34.201 slat (nsec): min=7020, max=85787, avg=19543.44, stdev=7972.44 00:20:34.201 clat (usec): min=923, max=7121, avg=3732.75, stdev=871.92 00:20:34.201 lat (usec): min=931, max=7160, avg=3752.29, stdev=871.52 00:20:34.201 clat percentiles (usec): 00:20:34.201 | 1.00th=[ 1319], 5.00th=[ 2376], 10.00th=[ 2573], 20.00th=[ 3064], 00:20:34.201 | 30.00th=[ 3326], 40.00th=[ 3392], 50.00th=[ 3785], 60.00th=[ 3884], 00:20:34.201 | 70.00th=[ 4047], 80.00th=[ 4686], 90.00th=[ 4883], 95.00th=[ 5145], 00:20:34.201 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 5604], 99.95th=[ 5997], 00:20:34.201 | 99.99th=[ 6980] 00:20:34.201 bw ( KiB/s): min=16256, max=17776, per=25.97%, avg=16903.22, stdev=558.33, samples=9 00:20:34.201 iops : min= 2032, max= 2222, avg=2112.89, stdev=69.79, samples=9 00:20:34.201 lat (usec) : 1000=0.05% 00:20:34.201 lat (msec) : 2=1.91%, 4=67.36%, 10=30.68% 00:20:34.201 cpu : usr=92.38%, sys=6.60%, ctx=6, majf=0, minf=10 00:20:34.201 IO depths : 1=0.2%, 2=5.6%, 4=63.4%, 8=30.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.201 complete : 0=0.0%, 4=97.8%, 8=2.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.201 issued rwts: total=10563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.201 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:34.201 filename0: (groupid=0, jobs=1): err= 0: pid=83924: Mon Jul 15 09:46:28 2024 00:20:34.201 read: IOPS=2065, BW=16.1MiB/s (16.9MB/s)(80.7MiB/5004msec) 00:20:34.201 slat (nsec): min=3816, max=63388, avg=14865.72, stdev=8075.46 00:20:34.201 clat (usec): min=571, max=7863, avg=3828.86, stdev=866.05 00:20:34.201 lat (usec): min=586, max=7882, avg=3843.73, stdev=865.61 00:20:34.201 clat percentiles (usec): 00:20:34.201 | 1.00th=[ 1926], 5.00th=[ 2245], 10.00th=[ 2671], 20.00th=[ 3294], 00:20:34.201 | 30.00th=[ 3359], 40.00th=[ 3458], 50.00th=[ 3851], 60.00th=[ 3916], 00:20:34.201 | 70.00th=[ 4424], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5145], 00:20:34.201 | 99.00th=[ 5342], 99.50th=[ 5473], 99.90th=[ 6980], 99.95th=[ 7570], 00:20:34.201 | 99.99th=[ 7570] 00:20:34.201 bw ( KiB/s): min=15536, max=17024, per=25.38%, avg=16517.33, stdev=466.13, samples=9 00:20:34.201 iops : min= 1942, max= 2128, avg=2064.67, stdev=58.27, samples=9 00:20:34.201 lat (usec) : 750=0.01%, 1000=0.02% 00:20:34.201 lat (msec) : 2=1.21%, 4=64.50%, 10=34.26% 00:20:34.201 cpu : usr=92.64%, sys=6.44%, ctx=5, majf=0, minf=0 00:20:34.201 IO depths : 1=0.1%, 2=7.2%, 4=62.7%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.201 complete : 0=0.0%, 4=97.2%, 8=2.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.201 issued rwts: total=10335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.201 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:34.201 filename1: (groupid=0, jobs=1): err= 0: pid=83925: Mon Jul 15 09:46:28 2024 00:20:34.201 read: IOPS=1867, BW=14.6MiB/s (15.3MB/s)(73.0MiB/5002msec) 00:20:34.201 slat (usec): min=7, max=117, avg=16.99, stdev= 7.33 00:20:34.201 clat (usec): min=1312, max=7214, avg=4224.69, stdev=775.86 00:20:34.201 lat (usec): min=1321, max=7244, avg=4241.68, stdev=776.41 00:20:34.201 clat percentiles (usec): 00:20:34.201 | 1.00th=[ 2278], 5.00th=[ 3294], 
10.00th=[ 3326], 20.00th=[ 3392], 00:20:34.201 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 4686], 00:20:34.201 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5211], 00:20:34.201 | 99.00th=[ 5932], 99.50th=[ 5997], 99.90th=[ 6259], 99.95th=[ 6259], 00:20:34.201 | 99.99th=[ 7242] 00:20:34.201 bw ( KiB/s): min=12672, max=16544, per=22.65%, avg=14744.89, stdev=1791.00, samples=9 00:20:34.201 iops : min= 1584, max= 2068, avg=1843.11, stdev=223.88, samples=9 00:20:34.201 lat (msec) : 2=0.34%, 4=51.04%, 10=48.61% 00:20:34.201 cpu : usr=92.48%, sys=6.66%, ctx=22, majf=0, minf=0 00:20:34.201 IO depths : 1=0.3%, 2=14.6%, 4=58.8%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.201 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.201 issued rwts: total=9339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.201 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:34.201 filename1: (groupid=0, jobs=1): err= 0: pid=83926: Mon Jul 15 09:46:28 2024 00:20:34.201 read: IOPS=2094, BW=16.4MiB/s (17.2MB/s)(81.8MiB/5002msec) 00:20:34.201 slat (usec): min=5, max=288, avg=19.39, stdev= 8.74 00:20:34.201 clat (usec): min=1274, max=6961, avg=3764.26, stdev=847.07 00:20:34.201 lat (usec): min=1282, max=6987, avg=3783.65, stdev=846.17 00:20:34.201 clat percentiles (usec): 00:20:34.201 | 1.00th=[ 1991], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 3064], 00:20:34.201 | 30.00th=[ 3326], 40.00th=[ 3392], 50.00th=[ 3785], 60.00th=[ 3884], 00:20:34.201 | 70.00th=[ 4228], 80.00th=[ 4686], 90.00th=[ 4883], 95.00th=[ 5145], 00:20:34.201 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 5800], 99.95th=[ 5866], 00:20:34.201 | 99.99th=[ 6915] 00:20:34.201 bw ( KiB/s): min=15920, max=17584, per=25.73%, avg=16750.22, stdev=572.63, samples=9 00:20:34.201 iops : min= 1990, max= 2198, avg=2093.78, stdev=71.58, samples=9 00:20:34.201 lat (msec) : 2=1.06%, 4=66.80%, 10=32.14% 00:20:34.201 cpu : usr=92.20%, sys=6.32%, ctx=73, majf=0, minf=9 00:20:34.201 IO depths : 1=0.2%, 2=6.2%, 4=63.1%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.201 complete : 0=0.0%, 4=97.6%, 8=2.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.201 issued rwts: total=10475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.201 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:34.201 00:20:34.201 Run status group 0 (all jobs): 00:20:34.201 READ: bw=63.6MiB/s (66.6MB/s), 14.6MiB/s-16.5MiB/s (15.3MB/s-17.3MB/s), io=318MiB (334MB), run=5001-5004msec 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.201 ************************************ 00:20:34.201 END TEST fio_dif_rand_params 00:20:34.201 ************************************ 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.201 00:20:34.201 real 0m23.484s 00:20:34.201 user 2m3.782s 00:20:34.201 sys 0m8.077s 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:34.201 09:46:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.201 09:46:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:34.201 09:46:28 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:34.201 09:46:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:34.201 09:46:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.201 09:46:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:34.201 ************************************ 00:20:34.201 START TEST fio_dif_digest 00:20:34.201 ************************************ 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:34.201 09:46:28 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:34.201 bdev_null0 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:34.201 [2024-07-15 09:46:28.429440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:34.201 09:46:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:34.201 { 00:20:34.201 "params": { 00:20:34.201 "name": "Nvme$subsystem", 00:20:34.201 "trtype": "$TEST_TRANSPORT", 00:20:34.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.201 "adrfam": "ipv4", 00:20:34.201 "trsvcid": "$NVMF_PORT", 00:20:34.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.201 "hdgst": ${hdgst:-false}, 00:20:34.202 "ddgst": 
${ddgst:-false} 00:20:34.202 }, 00:20:34.202 "method": "bdev_nvme_attach_controller" 00:20:34.202 } 00:20:34.202 EOF 00:20:34.202 )") 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:34.202 "params": { 00:20:34.202 "name": "Nvme0", 00:20:34.202 "trtype": "tcp", 00:20:34.202 "traddr": "10.0.0.2", 00:20:34.202 "adrfam": "ipv4", 00:20:34.202 "trsvcid": "4420", 00:20:34.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:34.202 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:34.202 "hdgst": true, 00:20:34.202 "ddgst": true 00:20:34.202 }, 00:20:34.202 "method": "bdev_nvme_attach_controller" 00:20:34.202 }' 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:34.202 09:46:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.202 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:34.202 ... 
00:20:34.202 fio-3.35 00:20:34.202 Starting 3 threads 00:20:46.391 00:20:46.391 filename0: (groupid=0, jobs=1): err= 0: pid=84032: Mon Jul 15 09:46:39 2024 00:20:46.391 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(282MiB/10006msec) 00:20:46.391 slat (nsec): min=4489, max=67381, avg=20163.56, stdev=10066.31 00:20:46.391 clat (usec): min=12931, max=18303, avg=13242.88, stdev=274.85 00:20:46.391 lat (usec): min=12941, max=18328, avg=13263.04, stdev=278.18 00:20:46.391 clat percentiles (usec): 00:20:46.391 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:20:46.391 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:20:46.391 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13566], 95.00th=[13566], 00:20:46.391 | 99.00th=[13698], 99.50th=[13960], 99.90th=[18220], 99.95th=[18220], 00:20:46.391 | 99.99th=[18220] 00:20:46.391 bw ( KiB/s): min=28416, max=29952, per=33.33%, avg=28897.95, stdev=456.87, samples=19 00:20:46.391 iops : min= 222, max= 234, avg=225.74, stdev= 3.56, samples=19 00:20:46.391 lat (msec) : 20=100.00% 00:20:46.392 cpu : usr=91.23%, sys=8.15%, ctx=13, majf=0, minf=0 00:20:46.392 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.392 issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.392 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:46.392 filename0: (groupid=0, jobs=1): err= 0: pid=84033: Mon Jul 15 09:46:39 2024 00:20:46.392 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(282MiB/10003msec) 00:20:46.392 slat (usec): min=7, max=110, avg=20.83, stdev=10.42 00:20:46.392 clat (usec): min=12937, max=15338, avg=13237.76, stdev=211.93 00:20:46.392 lat (usec): min=12945, max=15361, avg=13258.60, stdev=216.10 00:20:46.392 clat percentiles (usec): 00:20:46.392 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:20:46.392 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:20:46.392 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13566], 95.00th=[13566], 00:20:46.392 | 99.00th=[13698], 99.50th=[13960], 99.90th=[15270], 99.95th=[15270], 00:20:46.392 | 99.99th=[15401] 00:20:46.392 bw ( KiB/s): min=28359, max=29952, per=33.34%, avg=28901.00, stdev=459.14, samples=19 00:20:46.392 iops : min= 221, max= 234, avg=225.74, stdev= 3.65, samples=19 00:20:46.392 lat (msec) : 20=100.00% 00:20:46.392 cpu : usr=93.08%, sys=6.23%, ctx=87, majf=0, minf=9 00:20:46.392 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.392 issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.392 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:46.392 filename0: (groupid=0, jobs=1): err= 0: pid=84034: Mon Jul 15 09:46:39 2024 00:20:46.392 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(282MiB/10004msec) 00:20:46.392 slat (nsec): min=7492, max=93813, avg=20801.34, stdev=9316.04 00:20:46.392 clat (usec): min=10836, max=16559, avg=13238.49, stdev=273.40 00:20:46.392 lat (usec): min=10845, max=16575, avg=13259.29, stdev=276.63 00:20:46.392 clat percentiles (usec): 00:20:46.392 | 1.00th=[12911], 5.00th=[13042], 10.00th=[13042], 20.00th=[13042], 00:20:46.392 | 30.00th=[13042], 40.00th=[13042], 50.00th=[13173], 
60.00th=[13304], 00:20:46.392 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13566], 95.00th=[13566], 00:20:46.392 | 99.00th=[13698], 99.50th=[14222], 99.90th=[16581], 99.95th=[16581], 00:20:46.392 | 99.99th=[16581] 00:20:46.392 bw ( KiB/s): min=28359, max=29952, per=33.33%, avg=28898.05, stdev=462.22, samples=19 00:20:46.392 iops : min= 221, max= 234, avg=225.74, stdev= 3.65, samples=19 00:20:46.392 lat (msec) : 20=100.00% 00:20:46.392 cpu : usr=92.71%, sys=6.71%, ctx=20, majf=0, minf=0 00:20:46.392 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.392 issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.392 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:46.392 00:20:46.392 Run status group 0 (all jobs): 00:20:46.392 READ: bw=84.7MiB/s (88.8MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=847MiB (888MB), run=10003-10006msec 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.392 00:20:46.392 real 0m10.982s 00:20:46.392 user 0m28.330s 00:20:46.392 sys 0m2.380s 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:46.392 ************************************ 00:20:46.392 END TEST fio_dif_digest 00:20:46.392 ************************************ 00:20:46.392 09:46:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:46.392 09:46:39 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:46.392 09:46:39 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:46.392 09:46:39 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:46.392 rmmod nvme_tcp 00:20:46.392 rmmod nvme_fabrics 00:20:46.392 rmmod nvme_keyring 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@123 -- 
# modprobe -v -r nvme-fabrics 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83275 ']' 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83275 00:20:46.392 09:46:39 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 83275 ']' 00:20:46.392 09:46:39 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 83275 00:20:46.392 09:46:39 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:20:46.392 09:46:39 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.392 09:46:39 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83275 00:20:46.392 09:46:39 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:46.392 killing process with pid 83275 00:20:46.392 09:46:39 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:46.392 09:46:39 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83275' 00:20:46.392 09:46:39 nvmf_dif -- common/autotest_common.sh@967 -- # kill 83275 00:20:46.392 09:46:39 nvmf_dif -- common/autotest_common.sh@972 -- # wait 83275 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:20:46.392 09:46:39 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:46.392 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:46.392 Waiting for block devices as requested 00:20:46.392 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:46.392 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:46.392 09:46:40 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:46.392 09:46:40 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:46.392 09:46:40 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.392 09:46:40 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:46.392 09:46:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.392 09:46:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:46.392 09:46:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.392 09:46:40 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:46.392 00:20:46.392 real 0m59.687s 00:20:46.392 user 3m47.985s 00:20:46.392 sys 0m19.148s 00:20:46.392 09:46:40 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:46.392 ************************************ 00:20:46.392 END TEST nvmf_dif 00:20:46.392 ************************************ 00:20:46.392 09:46:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:46.392 09:46:40 -- common/autotest_common.sh@1142 -- # return 0 00:20:46.392 09:46:40 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:46.392 09:46:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:46.392 09:46:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.392 09:46:40 -- common/autotest_common.sh@10 -- # set +x 00:20:46.392 ************************************ 00:20:46.392 START TEST nvmf_abort_qd_sizes 00:20:46.392 ************************************ 00:20:46.392 09:46:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 
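Before this next test begins, the nvmf_dif run above was torn down with the killprocess helper (kill -0 check, name lookup, SIGTERM, wait). A stripped-down sketch of that pattern is below, assuming the target was started as a child of the current shell; the in-tree helper in autotest_common.sh carries extra sudo and platform checks.

# Stripped-down shape of the killprocess teardown seen above (illustrative,
# not the in-tree helper): verify the pid is alive, log it, SIGTERM it, reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 0   # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid"    # works because the target was launched from this shell
}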
00:20:46.392 * Looking for test storage... 00:20:46.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:46.392 09:46:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:46.392 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:46.392 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.392 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.392 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.392 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.392 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.392 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.392 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:46.393 09:46:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:46.393 Cannot find device "nvmf_tgt_br" 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.393 Cannot find device "nvmf_tgt_br2" 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:46.393 Cannot find device "nvmf_tgt_br" 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:46.393 Cannot find device "nvmf_tgt_br2" 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:46.393 09:46:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:46.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:20:46.393 00:20:46.393 --- 10.0.0.2 ping statistics --- 00:20:46.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.393 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:46.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:46.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:20:46.393 00:20:46.393 --- 10.0.0.3 ping statistics --- 00:20:46.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.393 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:46.393 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:46.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:46.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:46.393 00:20:46.393 --- 10.0.0.1 ping statistics --- 00:20:46.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.393 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:46.394 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.394 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:20:46.394 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:46.394 09:46:40 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:47.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:47.328 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:47.328 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84619 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84619 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84619 ']' 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.328 09:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:47.328 [2024-07-15 09:46:41.760533] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
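Just above, nvmfappstart launches nvmf_tgt inside the test namespace and then blocks until the application is listening on its JSON-RPC socket. A rough sketch of that start-and-wait step follows; the retry count and sleep interval are assumptions, and the real waitforlisten helper is more thorough.

# Rough sketch of the start-and-wait step above (retry count and sleep are
# assumptions; the in-tree waitforlisten helper does more).
ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
for _ in {1..100}; do
    # The target is considered up once its JSON-RPC socket answers.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
         rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done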
00:20:47.328 [2024-07-15 09:46:41.760629] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.585 [2024-07-15 09:46:41.899839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.585 [2024-07-15 09:46:42.033970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.585 [2024-07-15 09:46:42.034261] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.585 [2024-07-15 09:46:42.034405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.585 [2024-07-15 09:46:42.034467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.585 [2024-07-15 09:46:42.034586] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.585 [2024-07-15 09:46:42.034821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.585 [2024-07-15 09:46:42.034952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.585 [2024-07-15 09:46:42.035745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.585 [2024-07-15 09:46:42.035794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.842 [2024-07-15 09:46:42.093926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:20:48.471 09:46:42 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
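The block above enumerates the NVMe controllers the abort test can claim by walking PCI class codes (class 01, subclass 08, prog-if 02) with lspci and checking driver bindings under /sys. A condensed sketch of that walk is below; it only reports each controller's current driver, whereas the in-tree helpers in scripts/common.sh add further platform checks.

# Condensed sketch of the PCI walk above (illustrative; scripts/common.sh does
# more): list NVMe-class functions and show which driver each is bound to.
while read -r bdf; do
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        driver=$(basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")")
    else
        driver=unbound
    fi
    echo "$bdf -> $driver"
done < <(lspci -mm -n -D | grep -- -p02 | awk '{ gsub(/"/, "", $2); if ($2 == "0108") print $1 }')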
00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:48.471 09:46:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:48.471 ************************************ 00:20:48.471 START TEST spdk_target_abort 00:20:48.471 ************************************ 00:20:48.471 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:20:48.471 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:48.471 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:48.471 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.471 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:48.729 spdk_targetn1 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:48.729 [2024-07-15 09:46:42.963483] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:48.729 [2024-07-15 09:46:42.991678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.729 09:46:42 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:48.729 09:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:48.729 09:46:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:48.729 09:46:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:48.729 09:46:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:48.729 09:46:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:48.729 09:46:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:48.729 09:46:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:48.729 09:46:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:48.729 09:46:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:48.729 09:46:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:48.729 09:46:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:48.729 09:46:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:52.007 Initializing NVMe Controllers 00:20:52.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:52.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:52.007 Initialization complete. Launching workers. 
00:20:52.007 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10515, failed: 0 00:20:52.007 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1014, failed to submit 9501 00:20:52.007 success 756, unsuccess 258, failed 0 00:20:52.007 09:46:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:52.007 09:46:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:55.407 Initializing NVMe Controllers 00:20:55.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:55.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:55.407 Initialization complete. Launching workers. 00:20:55.407 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8976, failed: 0 00:20:55.407 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1170, failed to submit 7806 00:20:55.407 success 376, unsuccess 794, failed 0 00:20:55.407 09:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:55.407 09:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:58.717 Initializing NVMe Controllers 00:20:58.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:58.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:58.717 Initialization complete. Launching workers. 
00:20:58.717 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31001, failed: 0 00:20:58.717 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2303, failed to submit 28698 00:20:58.717 success 437, unsuccess 1866, failed 0 00:20:58.717 09:46:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:58.717 09:46:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.717 09:46:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:58.717 09:46:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.717 09:46:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:58.717 09:46:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.717 09:46:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:58.974 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.974 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84619 00:20:58.974 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84619 ']' 00:20:58.974 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84619 00:20:58.974 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:20:58.974 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.974 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84619 00:20:58.974 killing process with pid 84619 00:20:58.974 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:58.974 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:58.974 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84619' 00:20:58.974 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84619 00:20:58.974 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84619 00:20:59.232 00:20:59.232 ************************************ 00:20:59.232 END TEST spdk_target_abort 00:20:59.232 ************************************ 00:20:59.232 real 0m10.808s 00:20:59.232 user 0m43.439s 00:20:59.232 sys 0m2.127s 00:20:59.232 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:59.232 09:46:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:59.491 09:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:20:59.491 09:46:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:59.491 09:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:59.491 09:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.491 09:46:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:59.491 
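The three runs above sweep the abort example across queue depths 4, 24, and 64 against the TCP subsystem before tearing it down. Condensed, that sweep looks roughly like the loop below; the connection string and example path are copied from the trace, and per-run result checking is omitted.

# Shape of the queue-depth sweep that just ran (the -r string mirrors the
# trace; per-run result checking is left out of this sketch).
for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done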
************************************ 00:20:59.491 START TEST kernel_target_abort 00:20:59.491 ************************************ 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:59.491 09:46:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:59.749 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:59.749 Waiting for block devices as requested 00:20:59.749 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:00.007 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:00.007 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:00.007 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:00.007 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:00.007 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:00.007 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:00.007 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:00.007 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:00.008 No valid GPT data, bailing 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:00.008 No valid GPT data, bailing 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:00.008 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:00.267 No valid GPT data, bailing 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:00.267 No valid GPT data, bailing 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da --hostid=d2f81337-7559-423d-93ce-5836d202b6da -a 10.0.0.1 -t tcp -s 4420 00:21:00.267 00:21:00.267 Discovery Log Number of Records 2, Generation counter 2 00:21:00.267 =====Discovery Log Entry 0====== 00:21:00.267 trtype: tcp 00:21:00.267 adrfam: ipv4 00:21:00.267 subtype: current discovery subsystem 00:21:00.267 treq: not specified, sq flow control disable supported 00:21:00.267 portid: 1 00:21:00.267 trsvcid: 4420 00:21:00.267 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:00.267 traddr: 10.0.0.1 00:21:00.267 eflags: none 00:21:00.267 sectype: none 00:21:00.267 =====Discovery Log Entry 1====== 00:21:00.267 trtype: tcp 00:21:00.267 adrfam: ipv4 00:21:00.267 subtype: nvme subsystem 00:21:00.267 treq: not specified, sq flow control disable supported 00:21:00.267 portid: 1 00:21:00.267 trsvcid: 4420 00:21:00.267 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:00.267 traddr: 10.0.0.1 00:21:00.267 eflags: none 00:21:00.267 sectype: none 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:00.267 09:46:54 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:00.267 09:46:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:03.550 Initializing NVMe Controllers 00:21:03.550 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:03.550 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:03.550 Initialization complete. Launching workers. 00:21:03.550 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34359, failed: 0 00:21:03.550 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34359, failed to submit 0 00:21:03.550 success 0, unsuccess 34359, failed 0 00:21:03.550 09:46:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:03.550 09:46:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:06.826 Initializing NVMe Controllers 00:21:06.826 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:06.826 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:06.826 Initialization complete. Launching workers. 
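The block above shows nvmf/common.sh exporting a spare local namespace (/dev/nvme1n1, the first one without GPT data) through the Linux kernel nvmet target over configfs before the abort sweep runs. A minimal standalone sketch of the same configfs steps, with the NQN, address and backing device taken from the log and the attribute names assumed from the stock nvmet configfs layout:

    # load the kernel NVMe/TCP target (nvmet is pulled in as a dependency)
    modprobe nvmet_tcp
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir -p "$sub/namespaces/1" /sys/kernel/config/nvmet/ports/1
    echo 1 > "$sub/attr_allow_any_host"                 # accept any host NQN
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path" # back the namespace with the spare disk
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo tcp      > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo 4420     > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    echo ipv4     > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    ln -s "$sub" /sys/kernel/config/nvmet/ports/1/subsystems/   # expose the subsystem on the port

Once nvme discover confirms the listener, rabort points the SPDK abort example at it three times with queue depths 4, 24 and 64 (the last two complete just below); the growing "failed to submit" counters at the deeper queues reflect aborts that could no longer be queued against the outstanding I/O.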
00:21:06.826 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72120, failed: 0 00:21:06.826 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31648, failed to submit 40472 00:21:06.826 success 0, unsuccess 31648, failed 0 00:21:06.826 09:47:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:06.826 09:47:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:10.111 Initializing NVMe Controllers 00:21:10.111 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:10.111 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:10.111 Initialization complete. Launching workers. 00:21:10.111 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86442, failed: 0 00:21:10.111 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21590, failed to submit 64852 00:21:10.111 success 0, unsuccess 21590, failed 0 00:21:10.111 09:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:10.111 09:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:10.111 09:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:10.111 09:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:10.111 09:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:10.111 09:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:10.111 09:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:10.111 09:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:10.111 09:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:10.111 09:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:10.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:12.576 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:12.576 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:12.834 00:21:12.834 real 0m13.339s 00:21:12.834 user 0m6.382s 00:21:12.834 sys 0m4.395s 00:21:12.834 09:47:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:12.834 09:47:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:12.834 ************************************ 00:21:12.834 END TEST kernel_target_abort 00:21:12.834 ************************************ 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:12.834 
09:47:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:12.834 rmmod nvme_tcp 00:21:12.834 rmmod nvme_fabrics 00:21:12.834 rmmod nvme_keyring 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84619 ']' 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84619 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84619 ']' 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84619 00:21:12.834 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84619) - No such process 00:21:12.834 Process with pid 84619 is not found 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84619 is not found' 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:12.834 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:13.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:13.350 Waiting for block devices as requested 00:21:13.350 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:13.350 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:13.350 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:13.350 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:13.350 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.350 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:13.350 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.350 09:47:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:13.350 09:47:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.350 09:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:13.350 00:21:13.350 real 0m27.388s 00:21:13.350 user 0m51.072s 00:21:13.350 sys 0m7.781s 00:21:13.350 09:47:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:13.350 09:47:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:13.350 ************************************ 00:21:13.350 END TEST nvmf_abort_qd_sizes 00:21:13.350 ************************************ 00:21:13.608 09:47:07 -- common/autotest_common.sh@1142 -- # return 0 00:21:13.608 09:47:07 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:13.608 09:47:07 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:21:13.608 09:47:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.608 09:47:07 -- common/autotest_common.sh@10 -- # set +x 00:21:13.608 ************************************ 00:21:13.608 START TEST keyring_file 00:21:13.608 ************************************ 00:21:13.608 09:47:07 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:13.608 * Looking for test storage... 00:21:13.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:13.608 09:47:07 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:13.608 09:47:07 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.608 09:47:07 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.608 09:47:07 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.608 09:47:07 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.608 09:47:07 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.608 09:47:07 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.608 09:47:07 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:13.608 09:47:07 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:13.608 09:47:07 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:13.608 09:47:07 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:13.608 09:47:07 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:13.608 09:47:07 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:13.608 09:47:07 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:13.608 09:47:07 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VWkUH16p1g 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VWkUH16p1g 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VWkUH16p1g 00:21:13.608 09:47:07 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.VWkUH16p1g 00:21:13.608 09:47:07 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5BEnHiI8in 00:21:13.608 09:47:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:13.608 09:47:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:13.608 09:47:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5BEnHiI8in 00:21:13.608 09:47:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5BEnHiI8in 00:21:13.608 09:47:08 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5BEnHiI8in 00:21:13.608 09:47:08 keyring_file -- keyring/file.sh@30 -- # tgtpid=85487 00:21:13.609 09:47:08 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.609 09:47:08 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85487 00:21:13.609 09:47:08 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85487 ']' 00:21:13.609 09:47:08 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.609 09:47:08 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.609 09:47:08 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.609 09:47:08 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.609 09:47:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:13.866 [2024-07-15 09:47:08.115528] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
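Both bperf keys above are prepared the same way: a mktemp file, the raw hex key run through format_interchange_psk into the NVMe TLS interchange format (an NVMeTLSkey-1:... string produced by a small python helper in nvmf/common.sh, elided here), and permissions tightened to 0600 so the keyring will accept the file. A condensed sketch of that prep, reusing the helpers sourced by keyring/common.sh and the key material from the log:

    path=$(mktemp)                                              # e.g. /tmp/tmp.VWkUH16p1g
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
    chmod 0600 "$path"                                          # looser modes are rejected later in the test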
00:21:13.866 [2024-07-15 09:47:08.115629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85487 ] 00:21:13.866 [2024-07-15 09:47:08.256213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.125 [2024-07-15 09:47:08.373627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.125 [2024-07-15 09:47:08.430611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:14.691 09:47:09 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.691 09:47:09 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:14.691 09:47:09 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:14.691 09:47:09 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.691 09:47:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:14.691 [2024-07-15 09:47:09.072830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.691 null0 00:21:14.691 [2024-07-15 09:47:09.104762] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.691 [2024-07-15 09:47:09.105003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:14.691 [2024-07-15 09:47:09.112796] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:14.691 09:47:09 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.692 09:47:09 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:14.692 [2024-07-15 09:47:09.124799] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:14.692 request: 00:21:14.692 { 00:21:14.692 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.692 "secure_channel": false, 00:21:14.692 "listen_address": { 00:21:14.692 "trtype": "tcp", 00:21:14.692 "traddr": "127.0.0.1", 00:21:14.692 "trsvcid": "4420" 00:21:14.692 }, 00:21:14.692 "method": "nvmf_subsystem_add_listener", 00:21:14.692 "req_id": 1 00:21:14.692 } 00:21:14.692 Got JSON-RPC error response 00:21:14.692 response: 00:21:14.692 { 00:21:14.692 "code": -32602, 00:21:14.692 "message": "Invalid parameters" 00:21:14.692 } 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
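The NOT rpc_cmd nvmf_subsystem_add_listener call above is a deliberate negative test: file.sh has already opened the 127.0.0.1:4420 listener on nqn.2016-06.io.spdk:cnode0, so adding it a second time must fail, and the harness only proceeds because the JSON-RPC error (-32602, logged internally as "Listener already exists") came back as expected. Reproduced outside the harness it would look roughly like this (rpc.py path from the log; flag spellings assumed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420  # first add succeeds
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420  # second add -> "Invalid parameters"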
00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.692 09:47:09 keyring_file -- keyring/file.sh@46 -- # bperfpid=85504 00:21:14.692 09:47:09 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85504 /var/tmp/bperf.sock 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85504 ']' 00:21:14.692 09:47:09 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.692 09:47:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:14.951 [2024-07-15 09:47:09.183449] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:14.951 [2024-07-15 09:47:09.183553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85504 ] 00:21:14.951 [2024-07-15 09:47:09.318265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.250 [2024-07-15 09:47:09.438274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.250 [2024-07-15 09:47:09.494774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:15.817 09:47:10 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.817 09:47:10 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:15.817 09:47:10 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VWkUH16p1g 00:21:15.817 09:47:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VWkUH16p1g 00:21:16.076 09:47:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5BEnHiI8in 00:21:16.076 09:47:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5BEnHiI8in 00:21:16.334 09:47:10 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:16.334 09:47:10 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:16.334 09:47:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:16.334 09:47:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.334 09:47:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:16.593 09:47:10 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.VWkUH16p1g == 
\/\t\m\p\/\t\m\p\.\V\W\k\U\H\1\6\p\1\g ]] 00:21:16.593 09:47:10 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:16.593 09:47:10 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:16.593 09:47:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:16.593 09:47:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:16.593 09:47:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.851 09:47:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.5BEnHiI8in == \/\t\m\p\/\t\m\p\.\5\B\E\n\H\i\I\8\i\n ]] 00:21:16.851 09:47:11 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:16.851 09:47:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:16.851 09:47:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:16.851 09:47:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:16.851 09:47:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:16.851 09:47:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:17.109 09:47:11 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:17.109 09:47:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:17.109 09:47:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:17.109 09:47:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:17.109 09:47:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:17.109 09:47:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:17.109 09:47:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:17.368 09:47:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:17.368 09:47:11 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:17.368 09:47:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:17.627 [2024-07-15 09:47:12.039417] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.885 nvme0n1 00:21:17.885 09:47:12 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:17.885 09:47:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:17.885 09:47:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:17.885 09:47:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:17.885 09:47:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:17.885 09:47:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:18.143 09:47:12 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:18.143 09:47:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:18.143 09:47:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:18.143 09:47:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:18.143 09:47:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:21:18.143 09:47:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:18.143 09:47:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:18.402 09:47:12 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:18.402 09:47:12 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:18.402 Running I/O for 1 seconds... 00:21:19.391 00:21:19.391 Latency(us) 00:21:19.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.391 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:19.391 nvme0n1 : 1.01 12027.93 46.98 0.00 0.00 10604.13 5600.35 17754.30 00:21:19.391 =================================================================================================================== 00:21:19.391 Total : 12027.93 46.98 0.00 0.00 10604.13 5600.35 17754.30 00:21:19.391 0 00:21:19.391 09:47:13 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:19.391 09:47:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:19.648 09:47:14 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:19.648 09:47:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:19.648 09:47:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:19.648 09:47:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:19.648 09:47:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:19.648 09:47:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:20.215 09:47:14 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:20.215 09:47:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:20.215 09:47:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:20.215 09:47:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:20.215 09:47:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:20.215 09:47:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:20.215 09:47:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:20.474 09:47:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:20.474 09:47:14 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:20.474 09:47:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:20.474 09:47:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:20.474 09:47:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:20.474 09:47:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.474 09:47:14 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:20.474 09:47:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
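The sequence above is the positive path of the keyring test: the PSK file registered as key0 is consumed by bdev_nvme_attach_controller over the bperf RPC socket, the key's refcount rises from 1 to 2 while the nvme0 controller holds it, a one-second randrw run is driven through bdevperf.py, and the detach drops the refcount back to 1. Pulled out of the harness, the same steps look like this (paths, NQNs and flags copied from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.VWkUH16p1g
    $rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    $rpc -s $sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # 2 while nvme0 holds the key
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
    $rpc -s $sock bdev_nvme_detach_controller nvme0                                    # refcnt drops back to 1

The negative cases that follow (attaching with the wrong key, a key file with 0660 permissions, and a key file that has been removed) exercise the same attach path and are expected to fail with the JSON-RPC errors shown below.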
00:21:20.474 09:47:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:20.474 09:47:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:20.474 [2024-07-15 09:47:14.920927] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:20.474 [2024-07-15 09:47:14.921703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c74f0 (107): Transport endpoint is not connected 00:21:20.474 [2024-07-15 09:47:14.922693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c74f0 (9): Bad file descriptor 00:21:20.474 [2024-07-15 09:47:14.923690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:20.474 [2024-07-15 09:47:14.923712] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:20.474 [2024-07-15 09:47:14.923724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:20.474 request: 00:21:20.474 { 00:21:20.474 "name": "nvme0", 00:21:20.474 "trtype": "tcp", 00:21:20.474 "traddr": "127.0.0.1", 00:21:20.474 "adrfam": "ipv4", 00:21:20.474 "trsvcid": "4420", 00:21:20.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:20.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:20.474 "prchk_reftag": false, 00:21:20.474 "prchk_guard": false, 00:21:20.474 "hdgst": false, 00:21:20.474 "ddgst": false, 00:21:20.474 "psk": "key1", 00:21:20.474 "method": "bdev_nvme_attach_controller", 00:21:20.474 "req_id": 1 00:21:20.474 } 00:21:20.474 Got JSON-RPC error response 00:21:20.474 response: 00:21:20.474 { 00:21:20.474 "code": -5, 00:21:20.474 "message": "Input/output error" 00:21:20.474 } 00:21:20.734 09:47:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:20.734 09:47:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:20.734 09:47:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:20.734 09:47:14 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:20.734 09:47:14 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:20.734 09:47:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:20.734 09:47:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:20.734 09:47:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:20.734 09:47:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:20.734 09:47:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:20.991 09:47:15 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:20.991 09:47:15 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:20.991 09:47:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:20.991 09:47:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:20.991 09:47:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:20.991 09:47:15 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:20.991 09:47:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:21.249 09:47:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:21.249 09:47:15 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:21.249 09:47:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:21.507 09:47:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:21.507 09:47:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:21.507 09:47:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:21.507 09:47:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:21.507 09:47:15 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:22.099 09:47:16 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:22.099 09:47:16 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.VWkUH16p1g 00:21:22.099 09:47:16 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.VWkUH16p1g 00:21:22.099 09:47:16 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:22.099 09:47:16 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.VWkUH16p1g 00:21:22.099 09:47:16 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:22.099 09:47:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.099 09:47:16 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:22.099 09:47:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.099 09:47:16 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VWkUH16p1g 00:21:22.099 09:47:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VWkUH16p1g 00:21:22.099 [2024-07-15 09:47:16.475050] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VWkUH16p1g': 0100660 00:21:22.099 [2024-07-15 09:47:16.475103] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:22.099 request: 00:21:22.099 { 00:21:22.099 "name": "key0", 00:21:22.099 "path": "/tmp/tmp.VWkUH16p1g", 00:21:22.099 "method": "keyring_file_add_key", 00:21:22.099 "req_id": 1 00:21:22.099 } 00:21:22.099 Got JSON-RPC error response 00:21:22.099 response: 00:21:22.099 { 00:21:22.099 "code": -1, 00:21:22.099 "message": "Operation not permitted" 00:21:22.099 } 00:21:22.099 09:47:16 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:22.099 09:47:16 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:22.099 09:47:16 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:22.099 09:47:16 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:22.099 09:47:16 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.VWkUH16p1g 00:21:22.099 09:47:16 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VWkUH16p1g 00:21:22.099 09:47:16 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VWkUH16p1g 00:21:22.355 09:47:16 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.VWkUH16p1g 00:21:22.355 09:47:16 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:22.355 09:47:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:22.355 09:47:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:22.355 09:47:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:22.355 09:47:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:22.355 09:47:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:22.611 09:47:16 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:22.611 09:47:16 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:22.611 09:47:16 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:22.611 09:47:16 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:22.611 09:47:16 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:22.611 09:47:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.611 09:47:17 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:22.611 09:47:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:22.611 09:47:17 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:22.611 09:47:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:22.885 [2024-07-15 09:47:17.215216] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.VWkUH16p1g': No such file or directory 00:21:22.885 [2024-07-15 09:47:17.215262] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:22.885 [2024-07-15 09:47:17.215288] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:22.885 [2024-07-15 09:47:17.215297] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:22.885 [2024-07-15 09:47:17.215306] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:22.885 request: 00:21:22.885 { 00:21:22.885 "name": "nvme0", 00:21:22.885 "trtype": "tcp", 00:21:22.885 "traddr": "127.0.0.1", 00:21:22.885 "adrfam": "ipv4", 00:21:22.885 "trsvcid": "4420", 00:21:22.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:22.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:22.885 "prchk_reftag": false, 00:21:22.885 "prchk_guard": false, 00:21:22.885 "hdgst": false, 00:21:22.885 "ddgst": false, 00:21:22.885 "psk": "key0", 00:21:22.885 "method": "bdev_nvme_attach_controller", 00:21:22.885 "req_id": 1 00:21:22.885 } 00:21:22.885 
Got JSON-RPC error response 00:21:22.885 response: 00:21:22.885 { 00:21:22.885 "code": -19, 00:21:22.885 "message": "No such device" 00:21:22.885 } 00:21:22.885 09:47:17 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:22.885 09:47:17 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:22.886 09:47:17 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:22.886 09:47:17 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:22.886 09:47:17 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:22.886 09:47:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:23.142 09:47:17 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:23.142 09:47:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:23.142 09:47:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:23.142 09:47:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:23.142 09:47:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:23.142 09:47:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:23.142 09:47:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dGQZZBJvaj 00:21:23.142 09:47:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:23.142 09:47:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:23.142 09:47:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:23.142 09:47:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:23.142 09:47:17 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:23.142 09:47:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:23.142 09:47:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:23.142 09:47:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dGQZZBJvaj 00:21:23.142 09:47:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dGQZZBJvaj 00:21:23.142 09:47:17 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.dGQZZBJvaj 00:21:23.142 09:47:17 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dGQZZBJvaj 00:21:23.143 09:47:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dGQZZBJvaj 00:21:23.399 09:47:17 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:23.399 09:47:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:23.657 nvme0n1 00:21:23.913 09:47:18 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:23.913 09:47:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:23.913 09:47:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:23.913 09:47:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:23.913 09:47:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:21:23.913 09:47:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:23.913 09:47:18 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:23.913 09:47:18 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:23.913 09:47:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:24.170 09:47:18 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:24.170 09:47:18 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:24.170 09:47:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:24.170 09:47:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:24.170 09:47:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.426 09:47:18 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:24.426 09:47:18 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:24.426 09:47:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:24.426 09:47:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:24.426 09:47:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:24.426 09:47:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:24.426 09:47:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:24.682 09:47:19 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:24.682 09:47:19 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:24.682 09:47:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:24.939 09:47:19 keyring_file -- keyring/file.sh@104 -- # jq length 00:21:24.939 09:47:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:24.939 09:47:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:25.196 09:47:19 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:25.196 09:47:19 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dGQZZBJvaj 00:21:25.196 09:47:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dGQZZBJvaj 00:21:25.454 09:47:19 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5BEnHiI8in 00:21:25.454 09:47:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5BEnHiI8in 00:21:25.712 09:47:20 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:25.712 09:47:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:25.971 nvme0n1 00:21:25.971 09:47:20 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:25.971 09:47:20 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:26.540 09:47:20 keyring_file -- keyring/file.sh@112 -- # config='{ 00:21:26.540 "subsystems": [ 00:21:26.540 { 00:21:26.540 "subsystem": "keyring", 00:21:26.540 "config": [ 00:21:26.540 { 00:21:26.540 "method": "keyring_file_add_key", 00:21:26.540 "params": { 00:21:26.540 "name": "key0", 00:21:26.540 "path": "/tmp/tmp.dGQZZBJvaj" 00:21:26.540 } 00:21:26.540 }, 00:21:26.540 { 00:21:26.540 "method": "keyring_file_add_key", 00:21:26.540 "params": { 00:21:26.540 "name": "key1", 00:21:26.540 "path": "/tmp/tmp.5BEnHiI8in" 00:21:26.540 } 00:21:26.540 } 00:21:26.540 ] 00:21:26.540 }, 00:21:26.540 { 00:21:26.540 "subsystem": "iobuf", 00:21:26.540 "config": [ 00:21:26.540 { 00:21:26.540 "method": "iobuf_set_options", 00:21:26.540 "params": { 00:21:26.540 "small_pool_count": 8192, 00:21:26.540 "large_pool_count": 1024, 00:21:26.540 "small_bufsize": 8192, 00:21:26.540 "large_bufsize": 135168 00:21:26.540 } 00:21:26.540 } 00:21:26.540 ] 00:21:26.540 }, 00:21:26.540 { 00:21:26.540 "subsystem": "sock", 00:21:26.540 "config": [ 00:21:26.540 { 00:21:26.540 "method": "sock_set_default_impl", 00:21:26.540 "params": { 00:21:26.540 "impl_name": "uring" 00:21:26.540 } 00:21:26.540 }, 00:21:26.540 { 00:21:26.540 "method": "sock_impl_set_options", 00:21:26.540 "params": { 00:21:26.540 "impl_name": "ssl", 00:21:26.540 "recv_buf_size": 4096, 00:21:26.540 "send_buf_size": 4096, 00:21:26.540 "enable_recv_pipe": true, 00:21:26.540 "enable_quickack": false, 00:21:26.540 "enable_placement_id": 0, 00:21:26.540 "enable_zerocopy_send_server": true, 00:21:26.540 "enable_zerocopy_send_client": false, 00:21:26.540 "zerocopy_threshold": 0, 00:21:26.540 "tls_version": 0, 00:21:26.540 "enable_ktls": false 00:21:26.540 } 00:21:26.540 }, 00:21:26.540 { 00:21:26.540 "method": "sock_impl_set_options", 00:21:26.540 "params": { 00:21:26.540 "impl_name": "posix", 00:21:26.540 "recv_buf_size": 2097152, 00:21:26.540 "send_buf_size": 2097152, 00:21:26.540 "enable_recv_pipe": true, 00:21:26.540 "enable_quickack": false, 00:21:26.540 "enable_placement_id": 0, 00:21:26.540 "enable_zerocopy_send_server": true, 00:21:26.540 "enable_zerocopy_send_client": false, 00:21:26.540 "zerocopy_threshold": 0, 00:21:26.540 "tls_version": 0, 00:21:26.540 "enable_ktls": false 00:21:26.540 } 00:21:26.540 }, 00:21:26.540 { 00:21:26.540 "method": "sock_impl_set_options", 00:21:26.540 "params": { 00:21:26.540 "impl_name": "uring", 00:21:26.540 "recv_buf_size": 2097152, 00:21:26.540 "send_buf_size": 2097152, 00:21:26.540 "enable_recv_pipe": true, 00:21:26.540 "enable_quickack": false, 00:21:26.540 "enable_placement_id": 0, 00:21:26.540 "enable_zerocopy_send_server": false, 00:21:26.540 "enable_zerocopy_send_client": false, 00:21:26.540 "zerocopy_threshold": 0, 00:21:26.540 "tls_version": 0, 00:21:26.540 "enable_ktls": false 00:21:26.540 } 00:21:26.540 } 00:21:26.540 ] 00:21:26.540 }, 00:21:26.540 { 00:21:26.540 "subsystem": "vmd", 00:21:26.540 "config": [] 00:21:26.540 }, 00:21:26.540 { 00:21:26.540 "subsystem": "accel", 00:21:26.540 "config": [ 00:21:26.541 { 00:21:26.541 "method": "accel_set_options", 00:21:26.541 "params": { 00:21:26.541 "small_cache_size": 128, 00:21:26.541 "large_cache_size": 16, 00:21:26.541 "task_count": 2048, 00:21:26.541 "sequence_count": 2048, 00:21:26.541 "buf_count": 2048 00:21:26.541 } 00:21:26.541 } 00:21:26.541 ] 00:21:26.541 }, 00:21:26.541 { 00:21:26.541 "subsystem": "bdev", 00:21:26.541 "config": [ 00:21:26.541 { 
00:21:26.541 "method": "bdev_set_options", 00:21:26.541 "params": { 00:21:26.541 "bdev_io_pool_size": 65535, 00:21:26.541 "bdev_io_cache_size": 256, 00:21:26.541 "bdev_auto_examine": true, 00:21:26.541 "iobuf_small_cache_size": 128, 00:21:26.541 "iobuf_large_cache_size": 16 00:21:26.541 } 00:21:26.541 }, 00:21:26.541 { 00:21:26.541 "method": "bdev_raid_set_options", 00:21:26.541 "params": { 00:21:26.541 "process_window_size_kb": 1024 00:21:26.541 } 00:21:26.541 }, 00:21:26.541 { 00:21:26.541 "method": "bdev_iscsi_set_options", 00:21:26.541 "params": { 00:21:26.541 "timeout_sec": 30 00:21:26.541 } 00:21:26.541 }, 00:21:26.541 { 00:21:26.541 "method": "bdev_nvme_set_options", 00:21:26.541 "params": { 00:21:26.541 "action_on_timeout": "none", 00:21:26.541 "timeout_us": 0, 00:21:26.541 "timeout_admin_us": 0, 00:21:26.541 "keep_alive_timeout_ms": 10000, 00:21:26.541 "arbitration_burst": 0, 00:21:26.541 "low_priority_weight": 0, 00:21:26.541 "medium_priority_weight": 0, 00:21:26.541 "high_priority_weight": 0, 00:21:26.541 "nvme_adminq_poll_period_us": 10000, 00:21:26.541 "nvme_ioq_poll_period_us": 0, 00:21:26.541 "io_queue_requests": 512, 00:21:26.541 "delay_cmd_submit": true, 00:21:26.541 "transport_retry_count": 4, 00:21:26.541 "bdev_retry_count": 3, 00:21:26.541 "transport_ack_timeout": 0, 00:21:26.541 "ctrlr_loss_timeout_sec": 0, 00:21:26.541 "reconnect_delay_sec": 0, 00:21:26.541 "fast_io_fail_timeout_sec": 0, 00:21:26.541 "disable_auto_failback": false, 00:21:26.541 "generate_uuids": false, 00:21:26.541 "transport_tos": 0, 00:21:26.541 "nvme_error_stat": false, 00:21:26.541 "rdma_srq_size": 0, 00:21:26.541 "io_path_stat": false, 00:21:26.541 "allow_accel_sequence": false, 00:21:26.541 "rdma_max_cq_size": 0, 00:21:26.541 "rdma_cm_event_timeout_ms": 0, 00:21:26.541 "dhchap_digests": [ 00:21:26.541 "sha256", 00:21:26.541 "sha384", 00:21:26.541 "sha512" 00:21:26.541 ], 00:21:26.541 "dhchap_dhgroups": [ 00:21:26.541 "null", 00:21:26.541 "ffdhe2048", 00:21:26.541 "ffdhe3072", 00:21:26.541 "ffdhe4096", 00:21:26.541 "ffdhe6144", 00:21:26.541 "ffdhe8192" 00:21:26.541 ] 00:21:26.541 } 00:21:26.541 }, 00:21:26.541 { 00:21:26.541 "method": "bdev_nvme_attach_controller", 00:21:26.541 "params": { 00:21:26.541 "name": "nvme0", 00:21:26.541 "trtype": "TCP", 00:21:26.541 "adrfam": "IPv4", 00:21:26.541 "traddr": "127.0.0.1", 00:21:26.541 "trsvcid": "4420", 00:21:26.541 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:26.541 "prchk_reftag": false, 00:21:26.541 "prchk_guard": false, 00:21:26.541 "ctrlr_loss_timeout_sec": 0, 00:21:26.541 "reconnect_delay_sec": 0, 00:21:26.541 "fast_io_fail_timeout_sec": 0, 00:21:26.541 "psk": "key0", 00:21:26.541 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:26.541 "hdgst": false, 00:21:26.541 "ddgst": false 00:21:26.541 } 00:21:26.541 }, 00:21:26.541 { 00:21:26.541 "method": "bdev_nvme_set_hotplug", 00:21:26.541 "params": { 00:21:26.541 "period_us": 100000, 00:21:26.541 "enable": false 00:21:26.541 } 00:21:26.541 }, 00:21:26.541 { 00:21:26.541 "method": "bdev_wait_for_examine" 00:21:26.541 } 00:21:26.541 ] 00:21:26.541 }, 00:21:26.541 { 00:21:26.541 "subsystem": "nbd", 00:21:26.541 "config": [] 00:21:26.541 } 00:21:26.541 ] 00:21:26.541 }' 00:21:26.541 09:47:20 keyring_file -- keyring/file.sh@114 -- # killprocess 85504 00:21:26.541 09:47:20 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85504 ']' 00:21:26.541 09:47:20 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85504 00:21:26.541 09:47:20 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:21:26.541 09:47:20 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.541 09:47:20 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85504 00:21:26.541 killing process with pid 85504 00:21:26.541 Received shutdown signal, test time was about 1.000000 seconds 00:21:26.541 00:21:26.541 Latency(us) 00:21:26.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.541 =================================================================================================================== 00:21:26.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.541 09:47:20 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:26.541 09:47:20 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:26.541 09:47:20 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85504' 00:21:26.541 09:47:20 keyring_file -- common/autotest_common.sh@967 -- # kill 85504 00:21:26.541 09:47:20 keyring_file -- common/autotest_common.sh@972 -- # wait 85504 00:21:26.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:26.810 09:47:21 keyring_file -- keyring/file.sh@117 -- # bperfpid=85754 00:21:26.810 09:47:21 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85754 /var/tmp/bperf.sock 00:21:26.810 09:47:21 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85754 ']' 00:21:26.810 09:47:21 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:26.810 09:47:21 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.810 09:47:21 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:26.811 09:47:21 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
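Note: the bdevperf instance being started here receives its whole JSON configuration through a process-substitution descriptor (the -c /dev/fd/63 argument above) rather than a file on disk; the config itself is the save_config output captured a moment earlier. A minimal sketch of that launch pattern, reusing the binary, socket and flag values from this run, looks like:

    # Sketch only: "$config" stands for the JSON captured by save_config above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z \
        -c <(echo "$config") &
    # One way to block until the RPC socket answers before issuing keyring RPCs:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null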
00:21:26.811 09:47:21 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.811 09:47:21 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:21:26.811 "subsystems": [ 00:21:26.811 { 00:21:26.811 "subsystem": "keyring", 00:21:26.811 "config": [ 00:21:26.811 { 00:21:26.811 "method": "keyring_file_add_key", 00:21:26.811 "params": { 00:21:26.811 "name": "key0", 00:21:26.811 "path": "/tmp/tmp.dGQZZBJvaj" 00:21:26.811 } 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "method": "keyring_file_add_key", 00:21:26.811 "params": { 00:21:26.811 "name": "key1", 00:21:26.811 "path": "/tmp/tmp.5BEnHiI8in" 00:21:26.811 } 00:21:26.811 } 00:21:26.811 ] 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "subsystem": "iobuf", 00:21:26.811 "config": [ 00:21:26.811 { 00:21:26.811 "method": "iobuf_set_options", 00:21:26.811 "params": { 00:21:26.811 "small_pool_count": 8192, 00:21:26.811 "large_pool_count": 1024, 00:21:26.811 "small_bufsize": 8192, 00:21:26.811 "large_bufsize": 135168 00:21:26.811 } 00:21:26.811 } 00:21:26.811 ] 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "subsystem": "sock", 00:21:26.811 "config": [ 00:21:26.811 { 00:21:26.811 "method": "sock_set_default_impl", 00:21:26.811 "params": { 00:21:26.811 "impl_name": "uring" 00:21:26.811 } 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "method": "sock_impl_set_options", 00:21:26.811 "params": { 00:21:26.811 "impl_name": "ssl", 00:21:26.811 "recv_buf_size": 4096, 00:21:26.811 "send_buf_size": 4096, 00:21:26.811 "enable_recv_pipe": true, 00:21:26.811 "enable_quickack": false, 00:21:26.811 "enable_placement_id": 0, 00:21:26.811 "enable_zerocopy_send_server": true, 00:21:26.811 "enable_zerocopy_send_client": false, 00:21:26.811 "zerocopy_threshold": 0, 00:21:26.811 "tls_version": 0, 00:21:26.811 "enable_ktls": false 00:21:26.811 } 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "method": "sock_impl_set_options", 00:21:26.811 "params": { 00:21:26.811 "impl_name": "posix", 00:21:26.811 "recv_buf_size": 2097152, 00:21:26.811 "send_buf_size": 2097152, 00:21:26.811 "enable_recv_pipe": true, 00:21:26.811 "enable_quickack": false, 00:21:26.811 "enable_placement_id": 0, 00:21:26.811 "enable_zerocopy_send_server": true, 00:21:26.811 "enable_zerocopy_send_client": false, 00:21:26.811 "zerocopy_threshold": 0, 00:21:26.811 "tls_version": 0, 00:21:26.811 "enable_ktls": false 00:21:26.811 } 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "method": "sock_impl_set_options", 00:21:26.811 "params": { 00:21:26.811 "impl_name": "uring", 00:21:26.811 "recv_buf_size": 2097152, 00:21:26.811 "send_buf_size": 2097152, 00:21:26.811 "enable_recv_pipe": true, 00:21:26.811 "enable_quickack": false, 00:21:26.811 "enable_placement_id": 0, 00:21:26.811 "enable_zerocopy_send_server": false, 00:21:26.811 "enable_zerocopy_send_client": false, 00:21:26.811 "zerocopy_threshold": 0, 00:21:26.811 "tls_version": 0, 00:21:26.811 "enable_ktls": false 00:21:26.811 } 00:21:26.811 } 00:21:26.811 ] 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "subsystem": "vmd", 00:21:26.811 "config": [] 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "subsystem": "accel", 00:21:26.811 "config": [ 00:21:26.811 { 00:21:26.811 "method": "accel_set_options", 00:21:26.811 "params": { 00:21:26.811 "small_cache_size": 128, 00:21:26.811 "large_cache_size": 16, 00:21:26.811 "task_count": 2048, 00:21:26.811 "sequence_count": 2048, 00:21:26.811 "buf_count": 2048 00:21:26.811 } 00:21:26.811 } 00:21:26.811 ] 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "subsystem": "bdev", 00:21:26.811 "config": [ 00:21:26.811 { 00:21:26.811 "method": 
"bdev_set_options", 00:21:26.811 "params": { 00:21:26.811 "bdev_io_pool_size": 65535, 00:21:26.811 "bdev_io_cache_size": 256, 00:21:26.811 "bdev_auto_examine": true, 00:21:26.811 "iobuf_small_cache_size": 128, 00:21:26.811 "iobuf_large_cache_size": 16 00:21:26.811 } 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "method": "bdev_raid_set_options", 00:21:26.811 "params": { 00:21:26.811 "process_window_size_kb": 1024 00:21:26.811 } 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "method": "bdev_iscsi_set_options", 00:21:26.811 "params": { 00:21:26.811 "timeout_sec": 30 00:21:26.811 } 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "method": "bdev_nvme_set_options", 00:21:26.811 "params": { 00:21:26.811 "action_on_timeout": "none", 00:21:26.811 "timeout_us": 0, 00:21:26.811 "timeout_admin_us": 0, 00:21:26.811 "keep_alive_timeout_ms": 10000, 00:21:26.811 "arbitration_burst": 0, 00:21:26.811 "low_priority_weight": 0, 00:21:26.811 "medium_priority_weight": 0, 00:21:26.811 "high_priority_weight": 0, 00:21:26.811 "nvme_adminq_poll_period_us": 10000, 00:21:26.811 "nvme_ioq_poll_period_us": 0, 00:21:26.811 "io_queue_requests": 512, 00:21:26.811 "delay_cmd_submit": true, 00:21:26.811 "transport_retry_count": 4, 00:21:26.811 "bdev_retry_count": 3, 00:21:26.811 "transport_ack_timeout": 0, 00:21:26.811 "ctrlr_loss_timeout_sec": 0, 00:21:26.811 "reconnect_delay_sec": 0, 00:21:26.811 "fast_io_fail_timeout_sec": 0, 00:21:26.811 "disable_auto_failback": false, 00:21:26.811 "generate_uuids": false, 00:21:26.811 "transport_tos": 0, 00:21:26.811 "nvme_error_stat": false, 00:21:26.811 "rdma_srq_size": 0, 00:21:26.811 "io_path_stat": false, 00:21:26.811 "allow_accel_sequence": false, 00:21:26.811 "rdma_max_cq_size": 0, 00:21:26.811 "rdma_cm_event_timeout_ms": 0, 00:21:26.811 "dhchap_digests": [ 00:21:26.811 "sha256", 00:21:26.811 "sha384", 00:21:26.811 "sha512" 00:21:26.811 ], 00:21:26.811 "dhchap_dhgroups": [ 00:21:26.811 "null", 00:21:26.811 "ffdhe2048", 00:21:26.811 "ffdhe3072", 00:21:26.811 "ffdhe4096", 00:21:26.811 "ffdhe6144", 00:21:26.811 "ffdhe8192" 00:21:26.811 ] 00:21:26.811 } 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "method": "bdev_nvme_attach_controller", 00:21:26.811 "params": { 00:21:26.811 "name": "nvme0", 00:21:26.811 "trtype": "TCP", 00:21:26.811 "adrfam": "IPv4", 00:21:26.811 "traddr": "127.0.0.1", 00:21:26.811 "trsvcid": "4420", 00:21:26.811 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:26.811 "prchk_reftag": false, 00:21:26.811 "prchk_guard": false, 00:21:26.811 "ctrlr_loss_timeout_sec": 0, 00:21:26.811 "reconnect_delay_sec": 0, 00:21:26.811 "fast_io_fail_timeout_sec": 0, 00:21:26.811 "psk": "key0", 00:21:26.811 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:26.811 "hdgst": false, 00:21:26.811 "ddgst": false 00:21:26.811 } 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "method": "bdev_nvme_set_hotplug", 00:21:26.811 "params": { 00:21:26.811 "period_us": 100000, 00:21:26.811 "enable": false 00:21:26.811 } 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "method": "bdev_wait_for_examine" 00:21:26.811 } 00:21:26.811 ] 00:21:26.811 }, 00:21:26.811 { 00:21:26.811 "subsystem": "nbd", 00:21:26.811 "config": [] 00:21:26.811 } 00:21:26.811 ] 00:21:26.811 }' 00:21:26.811 09:47:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:26.811 [2024-07-15 09:47:21.071640] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:21:26.811 [2024-07-15 09:47:21.071753] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85754 ] 00:21:26.811 [2024-07-15 09:47:21.208747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.070 [2024-07-15 09:47:21.330332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.070 [2024-07-15 09:47:21.465503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:27.070 [2024-07-15 09:47:21.520714] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.636 09:47:22 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.636 09:47:22 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:27.636 09:47:22 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:27.636 09:47:22 keyring_file -- keyring/file.sh@120 -- # jq length 00:21:27.636 09:47:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:28.202 09:47:22 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:28.202 09:47:22 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:21:28.202 09:47:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:28.202 09:47:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:28.202 09:47:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:28.202 09:47:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:28.202 09:47:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:28.459 09:47:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:28.459 09:47:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:21:28.459 09:47:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:28.459 09:47:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:28.459 09:47:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:28.459 09:47:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:28.459 09:47:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:28.717 09:47:22 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:28.717 09:47:22 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:28.717 09:47:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:28.717 09:47:22 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:28.974 09:47:23 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:28.974 09:47:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:28.974 09:47:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.dGQZZBJvaj /tmp/tmp.5BEnHiI8in 00:21:28.974 09:47:23 keyring_file -- keyring/file.sh@20 -- # killprocess 85754 00:21:28.974 09:47:23 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85754 ']' 00:21:28.974 09:47:23 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85754 00:21:28.974 09:47:23 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:21:28.974 09:47:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.974 09:47:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85754 00:21:28.974 09:47:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:28.974 killing process with pid 85754 00:21:28.974 09:47:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:28.974 09:47:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85754' 00:21:28.974 Received shutdown signal, test time was about 1.000000 seconds 00:21:28.974 00:21:28.974 Latency(us) 00:21:28.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.974 =================================================================================================================== 00:21:28.974 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:28.974 09:47:23 keyring_file -- common/autotest_common.sh@967 -- # kill 85754 00:21:28.974 09:47:23 keyring_file -- common/autotest_common.sh@972 -- # wait 85754 00:21:29.231 09:47:23 keyring_file -- keyring/file.sh@21 -- # killprocess 85487 00:21:29.231 09:47:23 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85487 ']' 00:21:29.231 09:47:23 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85487 00:21:29.231 09:47:23 keyring_file -- common/autotest_common.sh@953 -- # uname 00:21:29.231 09:47:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.231 09:47:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85487 00:21:29.231 09:47:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:29.231 09:47:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:29.231 09:47:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85487' 00:21:29.231 killing process with pid 85487 00:21:29.231 09:47:23 keyring_file -- common/autotest_common.sh@967 -- # kill 85487 00:21:29.231 [2024-07-15 09:47:23.471005] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:29.231 09:47:23 keyring_file -- common/autotest_common.sh@972 -- # wait 85487 00:21:29.489 00:21:29.489 real 0m16.033s 00:21:29.489 user 0m40.106s 00:21:29.489 sys 0m3.082s 00:21:29.489 09:47:23 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:29.489 ************************************ 00:21:29.489 END TEST keyring_file 00:21:29.489 09:47:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:29.489 ************************************ 00:21:29.489 09:47:23 -- common/autotest_common.sh@1142 -- # return 0 00:21:29.489 09:47:23 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:21:29.489 09:47:23 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:29.489 09:47:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:29.489 09:47:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:29.489 09:47:23 -- common/autotest_common.sh@10 -- # set +x 00:21:29.489 ************************************ 00:21:29.489 START TEST keyring_linux 00:21:29.489 ************************************ 00:21:29.489 09:47:23 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:29.748 * Looking for test 
storage... 00:21:29.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:29.748 09:47:23 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:29.748 09:47:23 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:29.748 09:47:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:29.748 09:47:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.748 09:47:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.748 09:47:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:29.748 09:47:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.748 09:47:23 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:29.748 09:47:23 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:29.748 09:47:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.748 09:47:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:29.748 09:47:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.748 09:47:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d2f81337-7559-423d-93ce-5836d202b6da 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=d2f81337-7559-423d-93ce-5836d202b6da 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:29.748 09:47:24 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.748 09:47:24 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.748 09:47:24 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.748 09:47:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.748 09:47:24 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.748 09:47:24 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.748 09:47:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:29.748 09:47:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:29.748 09:47:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:29.748 09:47:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:29.748 09:47:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:29.748 09:47:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:29.748 09:47:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:29.748 09:47:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:29.748 09:47:24 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:29.748 /tmp/:spdk-test:key0 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:29.748 09:47:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:29.748 09:47:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:29.748 /tmp/:spdk-test:key1 00:21:29.748 09:47:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:29.748 09:47:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85868 00:21:29.748 09:47:24 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85868 00:21:29.749 09:47:24 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85868 ']' 00:21:29.749 09:47:24 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:29.749 09:47:24 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.749 09:47:24 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.749 09:47:24 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.749 09:47:24 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.749 09:47:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:29.749 [2024-07-15 09:47:24.171655] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
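Note: the keyring_linux test starts its own spdk_tgt and sits in waitforlisten until the target's RPC socket accepts commands. A simplified stand-in for that bring-up, using the binary and socket paths from this run (the real helper also tracks the pid and bounds its retries), is:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    tgtpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done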
00:21:29.749 [2024-07-15 09:47:24.171756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85868 ] 00:21:30.007 [2024-07-15 09:47:24.305594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.007 [2024-07-15 09:47:24.433391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.264 [2024-07-15 09:47:24.489984] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:30.827 09:47:25 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.827 09:47:25 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:30.827 09:47:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:30.827 09:47:25 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.827 09:47:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:30.827 [2024-07-15 09:47:25.092414] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.827 null0 00:21:30.827 [2024-07-15 09:47:25.124347] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.827 [2024-07-15 09:47:25.124569] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:30.827 09:47:25 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.827 09:47:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:30.827 754609592 00:21:30.827 09:47:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:30.827 1070689383 00:21:30.827 09:47:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85886 00:21:30.827 09:47:25 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:30.827 09:47:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85886 /var/tmp/bperf.sock 00:21:30.827 09:47:25 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85886 ']' 00:21:30.827 09:47:25 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:30.827 09:47:25 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:30.827 09:47:25 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:30.827 09:47:25 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.827 09:47:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:30.827 [2024-07-15 09:47:25.199925] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
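Note: the two keyctl calls above load the interchange-format PSKs into the kernel session keyring (@s), which is where SPDK's keyring_linux module later looks them up by name. The same round trip can be exercised by hand; keyctl prints the key's serial number, and the values differ on every run:

    # Sketch: register, inspect and remove a session-keyring PSK manually.
    sn=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s)
    keyctl print "$sn"                       # prints the interchange string back
    keyctl search @s user :spdk-test:key0    # resolves the same serial number
    keyctl unlink "$sn"                      # cleanup, mirroring linux.sh's unlink_key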
00:21:30.827 [2024-07-15 09:47:25.200013] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85886 ] 00:21:31.085 [2024-07-15 09:47:25.333753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.085 [2024-07-15 09:47:25.451328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.017 09:47:26 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.017 09:47:26 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:32.017 09:47:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:32.017 09:47:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:32.275 09:47:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:32.275 09:47:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:32.532 [2024-07-15 09:47:26.752061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:32.532 09:47:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:32.532 09:47:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:32.790 [2024-07-15 09:47:27.052394] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:32.790 nvme0n1 00:21:32.790 09:47:27 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:32.790 09:47:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:32.790 09:47:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:32.790 09:47:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:32.790 09:47:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:32.790 09:47:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:33.047 09:47:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:33.047 09:47:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:33.047 09:47:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:33.048 09:47:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:33.048 09:47:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:33.048 09:47:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:33.048 09:47:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:33.306 09:47:27 keyring_linux -- keyring/linux.sh@25 -- # sn=754609592 00:21:33.306 09:47:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:33.306 09:47:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:33.306 
09:47:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 754609592 == \7\5\4\6\0\9\5\9\2 ]] 00:21:33.306 09:47:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 754609592 00:21:33.306 09:47:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:33.306 09:47:27 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:33.564 Running I/O for 1 seconds... 00:21:34.499 00:21:34.499 Latency(us) 00:21:34.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.499 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:34.499 nvme0n1 : 1.01 11608.91 45.35 0.00 0.00 10958.34 8817.57 18350.08 00:21:34.499 =================================================================================================================== 00:21:34.499 Total : 11608.91 45.35 0.00 0.00 10958.34 8817.57 18350.08 00:21:34.499 0 00:21:34.499 09:47:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:34.499 09:47:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:34.757 09:47:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:34.757 09:47:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:34.757 09:47:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:34.757 09:47:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:34.757 09:47:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:34.757 09:47:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:35.028 09:47:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:35.028 09:47:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:35.028 09:47:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:35.028 09:47:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:35.029 09:47:29 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:21:35.029 09:47:29 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:35.029 09:47:29 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:35.029 09:47:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.029 09:47:29 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:35.029 09:47:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.029 09:47:29 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:35.029 09:47:29 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:35.287 [2024-07-15 09:47:29.558946] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:35.287 [2024-07-15 09:47:29.559055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16af460 (107): Transport endpoint is not connected 00:21:35.287 [2024-07-15 09:47:29.560045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16af460 (9): Bad file descriptor 00:21:35.287 [2024-07-15 09:47:29.561038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:35.287 [2024-07-15 09:47:29.561079] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:35.287 [2024-07-15 09:47:29.561099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:35.287 request: 00:21:35.287 { 00:21:35.287 "name": "nvme0", 00:21:35.287 "trtype": "tcp", 00:21:35.287 "traddr": "127.0.0.1", 00:21:35.287 "adrfam": "ipv4", 00:21:35.287 "trsvcid": "4420", 00:21:35.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:35.287 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:35.287 "prchk_reftag": false, 00:21:35.287 "prchk_guard": false, 00:21:35.287 "hdgst": false, 00:21:35.287 "ddgst": false, 00:21:35.287 "psk": ":spdk-test:key1", 00:21:35.287 "method": "bdev_nvme_attach_controller", 00:21:35.287 "req_id": 1 00:21:35.287 } 00:21:35.287 Got JSON-RPC error response 00:21:35.287 response: 00:21:35.287 { 00:21:35.287 "code": -5, 00:21:35.287 "message": "Input/output error" 00:21:35.287 } 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@33 -- # sn=754609592 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 754609592 00:21:35.287 1 links removed 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@33 -- # sn=1070689383 00:21:35.287 09:47:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1070689383 00:21:35.287 1 links removed 00:21:35.287 09:47:29 
keyring_linux -- keyring/linux.sh@41 -- # killprocess 85886 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85886 ']' 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85886 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85886 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:35.287 killing process with pid 85886 00:21:35.287 Received shutdown signal, test time was about 1.000000 seconds 00:21:35.287 00:21:35.287 Latency(us) 00:21:35.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.287 =================================================================================================================== 00:21:35.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85886' 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@967 -- # kill 85886 00:21:35.287 09:47:29 keyring_linux -- common/autotest_common.sh@972 -- # wait 85886 00:21:35.546 09:47:29 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85868 00:21:35.547 09:47:29 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85868 ']' 00:21:35.547 09:47:29 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85868 00:21:35.547 09:47:29 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:21:35.547 09:47:29 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.547 09:47:29 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85868 00:21:35.547 killing process with pid 85868 00:21:35.547 09:47:29 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:35.547 09:47:29 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:35.547 09:47:29 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85868' 00:21:35.547 09:47:29 keyring_linux -- common/autotest_common.sh@967 -- # kill 85868 00:21:35.547 09:47:29 keyring_linux -- common/autotest_common.sh@972 -- # wait 85868 00:21:35.806 ************************************ 00:21:35.806 END TEST keyring_linux 00:21:35.806 ************************************ 00:21:35.806 00:21:35.806 real 0m6.335s 00:21:35.806 user 0m12.389s 00:21:35.806 sys 0m1.547s 00:21:35.806 09:47:30 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:35.806 09:47:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:36.064 09:47:30 -- common/autotest_common.sh@1142 -- # return 0 00:21:36.064 09:47:30 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:21:36.064 09:47:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:21:36.064 09:47:30 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:21:36.064 09:47:30 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:21:36.064 09:47:30 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:21:36.064 09:47:30 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:21:36.064 09:47:30 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:21:36.064 09:47:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:21:36.064 09:47:30 -- spdk/autotest.sh@347 -- # '[' 0 
-eq 1 ']' 00:21:36.064 09:47:30 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:21:36.064 09:47:30 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:21:36.064 09:47:30 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:21:36.064 09:47:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:21:36.064 09:47:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:21:36.064 09:47:30 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:21:36.064 09:47:30 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:21:36.064 09:47:30 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:21:36.064 09:47:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:36.064 09:47:30 -- common/autotest_common.sh@10 -- # set +x 00:21:36.064 09:47:30 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:21:36.064 09:47:30 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:21:36.064 09:47:30 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:21:36.064 09:47:30 -- common/autotest_common.sh@10 -- # set +x 00:21:37.438 INFO: APP EXITING 00:21:37.438 INFO: killing all VMs 00:21:37.438 INFO: killing vhost app 00:21:37.438 INFO: EXIT DONE 00:21:38.004 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:38.004 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:38.004 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:38.571 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:38.571 Cleaning 00:21:38.571 Removing: /var/run/dpdk/spdk0/config 00:21:38.571 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:38.571 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:38.571 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:38.571 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:38.571 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:38.571 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:38.571 Removing: /var/run/dpdk/spdk1/config 00:21:38.571 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:38.571 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:38.571 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:38.571 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:38.571 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:38.830 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:38.830 Removing: /var/run/dpdk/spdk2/config 00:21:38.830 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:38.830 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:38.830 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:38.830 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:38.830 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:38.830 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:38.830 Removing: /var/run/dpdk/spdk3/config 00:21:38.830 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:38.830 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:38.830 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:38.830 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:38.830 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:38.830 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:38.830 Removing: /var/run/dpdk/spdk4/config 00:21:38.830 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:38.830 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:38.830 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:38.830 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:38.830 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:38.830 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:38.830 Removing: /dev/shm/nvmf_trace.0 00:21:38.830 Removing: /dev/shm/spdk_tgt_trace.pid58837 00:21:38.830 Removing: /var/run/dpdk/spdk0 00:21:38.830 Removing: /var/run/dpdk/spdk1 00:21:38.830 Removing: /var/run/dpdk/spdk2 00:21:38.830 Removing: /var/run/dpdk/spdk3 00:21:38.830 Removing: /var/run/dpdk/spdk4 00:21:38.830 Removing: /var/run/dpdk/spdk_pid58692 00:21:38.830 Removing: /var/run/dpdk/spdk_pid58837 00:21:38.830 Removing: /var/run/dpdk/spdk_pid59035 00:21:38.830 Removing: /var/run/dpdk/spdk_pid59116 00:21:38.830 Removing: /var/run/dpdk/spdk_pid59149 00:21:38.830 Removing: /var/run/dpdk/spdk_pid59253 00:21:38.830 Removing: /var/run/dpdk/spdk_pid59271 00:21:38.830 Removing: /var/run/dpdk/spdk_pid59389 00:21:38.830 Removing: /var/run/dpdk/spdk_pid59585 00:21:38.830 Removing: /var/run/dpdk/spdk_pid59731 00:21:38.830 Removing: /var/run/dpdk/spdk_pid59790 00:21:38.830 Removing: /var/run/dpdk/spdk_pid59866 00:21:38.830 Removing: /var/run/dpdk/spdk_pid59957 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60033 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60067 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60103 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60164 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60258 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60691 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60743 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60794 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60810 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60877 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60893 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60960 00:21:38.830 Removing: /var/run/dpdk/spdk_pid60976 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61022 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61032 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61072 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61090 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61218 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61248 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61323 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61374 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61404 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61463 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61497 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61532 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61566 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61601 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61635 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61670 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61704 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61739 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61779 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61808 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61848 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61877 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61917 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61946 00:21:38.830 Removing: /var/run/dpdk/spdk_pid61986 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62017 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62060 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62092 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62132 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62168 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62232 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62327 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62635 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62653 
00:21:38.830 Removing: /var/run/dpdk/spdk_pid62684 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62703 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62718 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62743 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62762 00:21:38.830 Removing: /var/run/dpdk/spdk_pid62772 00:21:39.088 Removing: /var/run/dpdk/spdk_pid62802 00:21:39.088 Removing: /var/run/dpdk/spdk_pid62810 00:21:39.088 Removing: /var/run/dpdk/spdk_pid62831 00:21:39.088 Removing: /var/run/dpdk/spdk_pid62850 00:21:39.088 Removing: /var/run/dpdk/spdk_pid62869 00:21:39.088 Removing: /var/run/dpdk/spdk_pid62890 00:21:39.088 Removing: /var/run/dpdk/spdk_pid62910 00:21:39.088 Removing: /var/run/dpdk/spdk_pid62929 00:21:39.088 Removing: /var/run/dpdk/spdk_pid62939 00:21:39.088 Removing: /var/run/dpdk/spdk_pid62968 00:21:39.088 Removing: /var/run/dpdk/spdk_pid62977 00:21:39.088 Removing: /var/run/dpdk/spdk_pid62999 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63035 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63043 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63078 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63142 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63171 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63180 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63214 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63224 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63231 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63275 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63293 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63321 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63331 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63340 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63354 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63365 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63374 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63384 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63399 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63426 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63454 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63469 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63492 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63507 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63520 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63555 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63572 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63604 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63606 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63619 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63632 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63635 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63648 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63656 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63663 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63737 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63789 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63895 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63933 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63979 00:21:39.088 Removing: /var/run/dpdk/spdk_pid63996 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64013 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64033 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64070 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64091 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64161 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64177 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64221 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64299 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64355 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64392 00:21:39.088 Removing: 
/var/run/dpdk/spdk_pid64479 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64527 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64560 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64785 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64877 00:21:39.088 Removing: /var/run/dpdk/spdk_pid64911 00:21:39.088 Removing: /var/run/dpdk/spdk_pid65229 00:21:39.088 Removing: /var/run/dpdk/spdk_pid65267 00:21:39.088 Removing: /var/run/dpdk/spdk_pid65567 00:21:39.088 Removing: /var/run/dpdk/spdk_pid65976 00:21:39.088 Removing: /var/run/dpdk/spdk_pid66256 00:21:39.088 Removing: /var/run/dpdk/spdk_pid67037 00:21:39.088 Removing: /var/run/dpdk/spdk_pid67862 00:21:39.088 Removing: /var/run/dpdk/spdk_pid67984 00:21:39.088 Removing: /var/run/dpdk/spdk_pid68047 00:21:39.088 Removing: /var/run/dpdk/spdk_pid69318 00:21:39.088 Removing: /var/run/dpdk/spdk_pid69524 00:21:39.088 Removing: /var/run/dpdk/spdk_pid72914 00:21:39.088 Removing: /var/run/dpdk/spdk_pid73220 00:21:39.088 Removing: /var/run/dpdk/spdk_pid73328 00:21:39.088 Removing: /var/run/dpdk/spdk_pid73454 00:21:39.088 Removing: /var/run/dpdk/spdk_pid73476 00:21:39.088 Removing: /var/run/dpdk/spdk_pid73508 00:21:39.088 Removing: /var/run/dpdk/spdk_pid73537 00:21:39.088 Removing: /var/run/dpdk/spdk_pid73629 00:21:39.088 Removing: /var/run/dpdk/spdk_pid73764 00:21:39.088 Removing: /var/run/dpdk/spdk_pid73914 00:21:39.088 Removing: /var/run/dpdk/spdk_pid73989 00:21:39.088 Removing: /var/run/dpdk/spdk_pid74182 00:21:39.088 Removing: /var/run/dpdk/spdk_pid74271 00:21:39.088 Removing: /var/run/dpdk/spdk_pid74365 00:21:39.088 Removing: /var/run/dpdk/spdk_pid74670 00:21:39.088 Removing: /var/run/dpdk/spdk_pid75059 00:21:39.089 Removing: /var/run/dpdk/spdk_pid75062 00:21:39.089 Removing: /var/run/dpdk/spdk_pid75338 00:21:39.089 Removing: /var/run/dpdk/spdk_pid75352 00:21:39.089 Removing: /var/run/dpdk/spdk_pid75372 00:21:39.089 Removing: /var/run/dpdk/spdk_pid75397 00:21:39.346 Removing: /var/run/dpdk/spdk_pid75407 00:21:39.346 Removing: /var/run/dpdk/spdk_pid75707 00:21:39.346 Removing: /var/run/dpdk/spdk_pid75759 00:21:39.346 Removing: /var/run/dpdk/spdk_pid76030 00:21:39.346 Removing: /var/run/dpdk/spdk_pid76232 00:21:39.346 Removing: /var/run/dpdk/spdk_pid76614 00:21:39.346 Removing: /var/run/dpdk/spdk_pid77123 00:21:39.346 Removing: /var/run/dpdk/spdk_pid77945 00:21:39.346 Removing: /var/run/dpdk/spdk_pid78526 00:21:39.346 Removing: /var/run/dpdk/spdk_pid78537 00:21:39.346 Removing: /var/run/dpdk/spdk_pid80434 00:21:39.346 Removing: /var/run/dpdk/spdk_pid80494 00:21:39.346 Removing: /var/run/dpdk/spdk_pid80559 00:21:39.346 Removing: /var/run/dpdk/spdk_pid80615 00:21:39.346 Removing: /var/run/dpdk/spdk_pid80736 00:21:39.346 Removing: /var/run/dpdk/spdk_pid80796 00:21:39.346 Removing: /var/run/dpdk/spdk_pid80851 00:21:39.346 Removing: /var/run/dpdk/spdk_pid80913 00:21:39.346 Removing: /var/run/dpdk/spdk_pid81234 00:21:39.346 Removing: /var/run/dpdk/spdk_pid82399 00:21:39.346 Removing: /var/run/dpdk/spdk_pid82546 00:21:39.346 Removing: /var/run/dpdk/spdk_pid82790 00:21:39.346 Removing: /var/run/dpdk/spdk_pid83337 00:21:39.346 Removing: /var/run/dpdk/spdk_pid83491 00:21:39.346 Removing: /var/run/dpdk/spdk_pid83648 00:21:39.346 Removing: /var/run/dpdk/spdk_pid83746 00:21:39.346 Removing: /var/run/dpdk/spdk_pid83908 00:21:39.346 Removing: /var/run/dpdk/spdk_pid84017 00:21:39.346 Removing: /var/run/dpdk/spdk_pid84670 00:21:39.346 Removing: /var/run/dpdk/spdk_pid84702 00:21:39.346 Removing: /var/run/dpdk/spdk_pid84743 00:21:39.346 Removing: /var/run/dpdk/spdk_pid84999 
00:21:39.346 Removing: /var/run/dpdk/spdk_pid85029 00:21:39.346 Removing: /var/run/dpdk/spdk_pid85064 00:21:39.346 Removing: /var/run/dpdk/spdk_pid85487 00:21:39.346 Removing: /var/run/dpdk/spdk_pid85504 00:21:39.346 Removing: /var/run/dpdk/spdk_pid85754 00:21:39.346 Removing: /var/run/dpdk/spdk_pid85868 00:21:39.346 Removing: /var/run/dpdk/spdk_pid85886 00:21:39.346 Clean 00:21:39.346 09:47:33 -- common/autotest_common.sh@1451 -- # return 0 00:21:39.346 09:47:33 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:21:39.346 09:47:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.346 09:47:33 -- common/autotest_common.sh@10 -- # set +x 00:21:39.346 09:47:33 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:21:39.346 09:47:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.346 09:47:33 -- common/autotest_common.sh@10 -- # set +x 00:21:39.346 09:47:33 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:39.346 09:47:33 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:39.346 09:47:33 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:39.346 09:47:33 -- spdk/autotest.sh@391 -- # hash lcov 00:21:39.346 09:47:33 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:39.346 09:47:33 -- spdk/autotest.sh@393 -- # hostname 00:21:39.346 09:47:33 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:39.604 geninfo: WARNING: invalid characters removed from testname! 
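(Editor's note on the coverage steps above and the merge/filter passes that follow: autotest.sh drives a standard lcov workflow here — capture a test tracefile tagged with the host name, merge it with a pre-test baseline, then strip vendored DPDK and system code from the combined report. A minimal sketch of that flow is below; the output paths, filter patterns, and --rc options are taken from the log itself, while the baseline capture of cov_base.info is an assumption, since that step is not shown in this section.)

# sketch only -- not the verbatim autotest.sh logic
OUT=/home/vagrant/spdk_repo/spdk/../output
SRC=/home/vagrant/spdk_repo/spdk
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
# assumed: zero-count baseline captured before the tests ran
$LCOV -c -i -d "$SRC" -o "$OUT/cov_base.info"
# capture coverage accumulated during the tests, tagged with the host name
$LCOV -c -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"
# merge baseline and test tracefiles into one report
$LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# drop vendored DPDK sources and system headers from the combined report
$LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"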
00:22:06.204 09:47:59 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:09.582 09:48:03 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:12.110 09:48:06 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:14.642 09:48:08 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:17.927 09:48:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:20.459 09:48:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:22.991 09:48:17 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:22.991 09:48:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.991 09:48:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:22.991 09:48:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.991 09:48:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.991 09:48:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.991 09:48:17 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.991 09:48:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.991 09:48:17 -- paths/export.sh@5 -- $ export PATH 00:22:22.991 09:48:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.991 09:48:17 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:22.991 09:48:17 -- common/autobuild_common.sh@444 -- $ date +%s 00:22:22.991 09:48:17 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721036897.XXXXXX 00:22:22.991 09:48:17 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721036897.9uSFTD 00:22:22.991 09:48:17 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:22:22.991 09:48:17 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:22:22.991 09:48:17 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:22.991 09:48:17 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:22.991 09:48:17 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:22.991 09:48:17 -- common/autobuild_common.sh@460 -- $ get_config_params 00:22:22.991 09:48:17 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:22:22.991 09:48:17 -- common/autotest_common.sh@10 -- $ set +x 00:22:22.991 09:48:17 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:22:22.991 09:48:17 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:22:22.991 09:48:17 -- pm/common@17 -- $ local monitor 00:22:22.991 09:48:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:22.991 09:48:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:22.991 09:48:17 -- pm/common@21 -- $ date +%s 00:22:22.991 09:48:17 -- pm/common@25 -- $ sleep 1 00:22:22.991 09:48:17 -- pm/common@21 -- $ date +%s 00:22:22.991 09:48:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721036897 00:22:22.991 09:48:17 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721036897 00:22:22.991 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721036897_collect-vmstat.pm.log 00:22:22.991 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721036897_collect-cpu-load.pm.log 00:22:23.925 09:48:18 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:22:23.925 09:48:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:23.925 09:48:18 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:23.925 09:48:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:23.925 09:48:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:23.925 09:48:18 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:23.925 09:48:18 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:23.925 09:48:18 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:23.925 09:48:18 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:24.183 09:48:18 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:24.183 09:48:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:24.183 09:48:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:24.183 09:48:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:24.183 09:48:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:24.183 09:48:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:24.183 09:48:18 -- pm/common@44 -- $ pid=87618 00:22:24.183 09:48:18 -- pm/common@50 -- $ kill -TERM 87618 00:22:24.183 09:48:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:24.183 09:48:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:24.183 09:48:18 -- pm/common@44 -- $ pid=87620 00:22:24.183 09:48:18 -- pm/common@50 -- $ kill -TERM 87620 00:22:24.183 + [[ -n 5261 ]] 00:22:24.183 + sudo kill 5261 00:22:24.194 [Pipeline] } 00:22:24.215 [Pipeline] // timeout 00:22:24.222 [Pipeline] } 00:22:24.241 [Pipeline] // stage 00:22:24.248 [Pipeline] } 00:22:24.269 [Pipeline] // catchError 00:22:24.279 [Pipeline] stage 00:22:24.282 [Pipeline] { (Stop VM) 00:22:24.298 [Pipeline] sh 00:22:24.618 + vagrant halt 00:22:28.804 ==> default: Halting domain... 00:22:35.370 [Pipeline] sh 00:22:35.642 + vagrant destroy -f 00:22:39.825 ==> default: Removing domain... 
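(Editor's note on the resource-monitor start/stop exchange above: collect-cpu-load and collect-vmstat follow a simple PID-file pattern — each collector from scripts/perf/pm is launched in the background with a shared epoch timestamp in its log name, its PID is recorded under the output/power directory, and stop_monitor_resources later sends SIGTERM to whatever those PID files point at. A rough sketch of that pattern is below; the collector names, flags, and .pid paths come from the log, while the start_monitor/stop_monitor wrappers are hypothetical stand-ins for the pm/common helpers, and the exact way the PID files get written is assumed.)

# sketch only -- simplified stand-in for scripts/perf/pm/common
POWER_OUT=/home/vagrant/spdk_repo/spdk/../output/power
start_monitor() {    # hypothetical helper
    local name=$1 stamp=$2
    "/home/vagrant/spdk_repo/spdk/scripts/perf/pm/$name" \
        -d "$POWER_OUT" -l -p "monitor.autopackage.sh.$stamp" &
    echo $! > "$POWER_OUT/$name.pid"   # assumed: record the collector's PID
}
stop_monitor() {     # hypothetical helper
    local name=$1
    [[ -e "$POWER_OUT/$name.pid" ]] && kill -TERM "$(cat "$POWER_OUT/$name.pid")"
}
stamp=$(date +%s)
start_monitor collect-cpu-load "$stamp"
start_monitor collect-vmstat "$stamp"
# ... packaging work runs here ...
stop_monitor collect-cpu-load
stop_monitor collect-vmstat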
00:22:39.837 [Pipeline] sh 00:22:40.115 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:40.123 [Pipeline] } 00:22:40.141 [Pipeline] // stage 00:22:40.146 [Pipeline] } 00:22:40.162 [Pipeline] // dir 00:22:40.168 [Pipeline] } 00:22:40.183 [Pipeline] // wrap 00:22:40.190 [Pipeline] } 00:22:40.204 [Pipeline] // catchError 00:22:40.211 [Pipeline] stage 00:22:40.213 [Pipeline] { (Epilogue) 00:22:40.228 [Pipeline] sh 00:22:40.505 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:47.085 [Pipeline] catchError 00:22:47.087 [Pipeline] { 00:22:47.101 [Pipeline] sh 00:22:47.379 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:47.379 Artifacts sizes are good 00:22:47.388 [Pipeline] } 00:22:47.409 [Pipeline] // catchError 00:22:47.422 [Pipeline] archiveArtifacts 00:22:47.429 Archiving artifacts 00:22:47.615 [Pipeline] cleanWs 00:22:47.625 [WS-CLEANUP] Deleting project workspace... 00:22:47.625 [WS-CLEANUP] Deferred wipeout is used... 00:22:47.630 [WS-CLEANUP] done 00:22:47.632 [Pipeline] } 00:22:47.647 [Pipeline] // stage 00:22:47.652 [Pipeline] } 00:22:47.669 [Pipeline] // node 00:22:47.677 [Pipeline] End of Pipeline 00:22:47.713 Finished: SUCCESS